3 Things You Should Know About DVJU Files


The average computer user will come across files with various extensions on their computer. These are the three or four letters that come at the end of a file name. For example, a file named myfile.pdf is a document in PDF format. Users and a computer's operating system identify the type of a file by its extension. You may come across some files that have the extension DJVU. Here are three things that you should know about these types of files.

What Does DJVU Mean?

The file extension DJVU indicates that you have a DJVU file. It is an image, which may be a photograph or some sort of document; the document could be born-digital or it could have been scanned. DJVU is a form of image compression technology that was developed at AT&T. It allows very clear, high-resolution images to be distributed over the Internet, so images of all sorts can be placed online at high quality.
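A safer check than trusting the extension alone is to read the file header. DjVu files conventionally begin with the bytes AT&TFORM, a nod to the format's AT&T origins. A minimal Python sketch, assuming that magic-byte convention:

```python
# Sketch: identify a DjVu file by its magic bytes rather than by
# extension alone. DjVu containers typically begin with b"AT&TFORM"
# (an assumption worth verifying against the DjVu specification).

def looks_like_djvu(path):
    """Return True if the file starts with the DjVu magic bytes."""
    with open(path, "rb") as f:
        return f.read(8) == b"AT&TFORM"
```

A renamed or corrupted file will fail this check even when its name ends in .djvu, which is exactly the kind of mismatch that causes the "will not open" problems discussed below.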

Programs That Will Open These Files

There are a variety of programs that are able to open these types of images. For Windows, there is WinDjView, DjVuLibre DjView, ACD Systems Canvas 14, and ACD Systems ACDSee 15. For those using the Mac operating system, MacDjView will open these as will DjVuLibre DjView, and SST DjVuReader. Linux users can use DjVuLibre DjView and KDE Okular to open DJVU images.

What If They Will Not Open?

From time to time, you may come across a DJVU image that will not open, no matter how many adjustments you make or what tricks you try. In that case, the most likely problem is that the file is corrupted, and you will need to find a different copy and download it.

Another common problem is not having the right version of the application needed to open the file. It may appear that you can open it, but it will not load. In that case, download any available updates to make sure your application is current. If you have downloaded the latest version and it still does not work, the problem may lie with your operating system instead: it may not know which program to use to open the DJVU file. This is easily rectified by manually telling the computer which program to use.


Source by Viktoria Carella

The Evolution of Python Language Over the Years


According to several websites, Python is one of the most popular coding languages of 2015. Along with being a high-level, general-purpose programming language, Python is also object-oriented and open source. A good number of developers across the world use Python to create GUI applications, websites and mobile apps. The differentiating factor that Python brings to the table is that it enables programmers to flesh out concepts by writing less, and more readable, code. Developers can further take advantage of several Python frameworks to reduce the time and effort required for building large and complex software applications.

The programming language is currently being used by a number of high-traffic websites including Google, Yahoo Groups, Yahoo Maps, Linux Weekly News, Shopzilla and Web Therapy. Likewise, Python is widely used for gaming, financial, scientific and educational applications. However, developers still use different versions of the programming language. According to the usage statistics and market share data of Python posted on W3techs, Python 2 is currently used by 99.4% of websites, whereas Python 3 is used by only 0.6%. That is why it is essential for each programmer to understand the different versions of Python and its evolution over the years.

How Has Python Evolved over the Years?

Conceived as a Hobby Programming Project

Despite being one of the most popular coding languages of 2015, Python was originally conceived by Guido van Rossum as a hobby project in December 1989. As Van Rossum's office remained closed over Christmas, he was looking for a hobby project that would keep him occupied during the holidays. He planned to create an interpreter for a new scripting language, and named the project Python; it was designed as a successor to the ABC programming language. After writing the interpreter, Van Rossum made the code public in February 1991. At present, the open source programming language is managed by the Python Software Foundation.

Version 1 of Python

Python 1.0 was released in January 1994. The major release included a number of functional programming tools, including lambda, filter, map and reduce. Version 1.4 added several new features such as keyword arguments, built-in support for complex numbers, and a basic form of data hiding. The major release was followed by two further releases, version 1.5 in December 1997 and version 1.6 in September 2000. Version 1 of Python lacked many features offered by the popular programming languages of the time, but the initial versions created a solid foundation for the development of a powerful and futuristic programming language.
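The functional tools named above still work in modern Python (reduce now lives in the functools module), so their flavour is easy to demonstrate:

```python
# The functional tools that shipped with Python 1.0, as they look today.
# reduce moved to the functools module in Python 3.
from functools import reduce

numbers = [1, 2, 3, 4, 5]

squares = list(map(lambda n: n * n, numbers))        # [1, 4, 9, 16, 25]
evens = list(filter(lambda n: n % 2 == 0, numbers))  # [2, 4]
total = reduce(lambda a, b: a + b, numbers)          # 15
```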

Version 2 of Python

In October 2000, Python 2.0 was released with the new list comprehension feature and a garbage collection system. The syntax for list comprehensions was inspired by functional programming languages like Haskell, but Python 2.0, unlike Haskell, gave preference to alphabetic keywords over punctuation characters. The garbage collection system could also collect reference cycles. The major release was followed by several minor releases, which added features such as support for nested scopes and the unification of Python's classes and types into a single hierarchy. The Python Software Foundation has already announced that there will be no Python 2.8, but it will provide support for version 2.7 of the programming language until 2020.
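A quick sketch of the list comprehension syntax that arrived in 2.0, showing its preference for alphabetic keywords (for, in, if) over Haskell-style punctuation:

```python
# List comprehensions, introduced in Python 2.0, fold a map/filter
# combination into a single readable expression.
words = ["python", "haskell", "abc", "perl"]

# Keep the short names, upper-cased.
short_upper = [w.upper() for w in words if len(w) <= 4]  # ['ABC', 'PERL']
```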

Version 3 of Python

Python 3.0 was released in December 2008. It came with several new features and enhancements, along with a number of deprecated features. The deprecated features and backward incompatibility make version 3 of Python completely different from earlier versions, so many developers still use Python 2.6 or 2.7 to retain the features removed in the last major release. However, the new features of Python 3 made it more modern and popular, and many developers switched to version 3.0 of the programming language to take advantage of them.

Python 3.0 replaced the print statement with a built-in print() function, while allowing programmers to use a custom separator between values. Likewise, it simplified the rules of ordering comparison: if the operands are not organized in a natural and meaningful order, the ordering comparison operators now raise a TypeError exception. Version 3 of the programming language also uses the terms text and data instead of Unicode and 8-bit strings: all text is Unicode by default, while binary data is kept in separate bytes objects.
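Both changes are easy to see in a couple of lines of Python 3:

```python
# Two visible Python 3 changes, sketched briefly.

# 1. print is a function, with a custom separator argument.
print("alpha", "beta", "gamma", sep=" | ")   # alpha | beta | gamma

# 2. Ordering comparisons between unrelated types raise TypeError.
try:
    result = 3 < "three"      # legal (if meaningless) in Python 2
except TypeError:
    result = "unorderable types"
```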

As Python 3 is backward incompatible, programmers cannot use features like string exceptions, old-style classes, and implicit relative imports, and they must be familiar with the changes made to syntax and APIs. They can use a tool called "2to3" to migrate applications from Python 2 to 3 smoothly. The tool highlights incompatibilities and areas of concern through comments and warnings, which help programmers make the necessary changes and upgrade their existing applications to the latest version of the language.
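As an illustration of the kind of rewrite 2to3 automates (this snippet is a hand-written example, not the tool's actual output):

```python
# Python 2 idioms that 2to3 would flag:
#   print "user:", name
#   for key, value in settings.iteritems():
#       print key, value

# The Python 3 equivalent after migration:
settings = {"theme": "dark", "lang": "en"}

print("user:", "guido")
pairs = []
for key, value in settings.items():   # iteritems() was removed in Python 3
    pairs.append((key, value))
```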

Latest Versions of Python

At present, programmers can choose either version 3.4.3 or 2.7.10 of Python. Python 2.7 gives developers improved numeric handling and enhancements to the standard library, and it makes it easier to migrate to Python 3. Python 3.4, on the other hand, comes with several new features and library modules, security improvements and CPython implementation improvements. However, a number of features are deprecated in both the Python API and the language itself. Developers can still use Python 3.4 for support over the longer run.

Version 4 of Python

Python 4.0 is expected to be available in 2023, after the release of Python 3.9. It is expected to come with features that help programmers switch from version 3 to 4 seamlessly, and experienced Python developers may be able to use a number of backward compatible features to modernize their existing applications without extra time and effort. However, developers will have to wait several years for a clear picture of Python 4.0, so they should monitor the latest releases in order to migrate easily to version 4.0 of the popular coding language.

Versions 2 and 3 of Python are completely different from each other, so each programmer must understand the features of these distinct versions and compare their functionality against the specific needs of the project. Developers should also check which version of Python each framework supports, and take advantage of the latest version of Python to get new features and long-term support.

Harri has an avid interest in Python and loves to blog interesting stuff about the technology. He recently wrote an interesting blog on Python at http://www.allaboutweb.biz/category/python/.


Source by Harri Srivastav

Logging for the PCI DSS – How to Gather Server and Firewall Audit Trails for PCI DSS Requirement 10


PCI DSS Requirement 10 calls for a full audit trail of all activity for all devices and users, and specifically requires all event and audit logs to be gathered centrally and securely backed up. The thinking here is twofold.

Firstly, as a pro-active security measure, the PCI DSS requires all logs to be reviewed on a daily basis (yes – you did read that correctly – review ALL logs DAILY – we shall return to this potentially overwhelming burden later). This requires the Security Team to become more intimate with the daily 'business as usual' workings of the network. This way, when a genuine security threat arises, it will be more easily detected through unusual events and activity patterns.

The second driver for logging all activity is to give a 'black box' recorded audit trail so that if a cyber crime is committed, a forensic analysis of the activity surrounding the security incident can be conducted. At best, the perpetrator and the extent of their wrongdoing can be identified and remediated. At worst – lessons can be learned from the attack so that processes and / or technological security defenses can be improved. Of course, if you are a PCI Merchant reading this, then your main driver is that this is a mandatory PCI DSS requirement – so we should get moving!

Which devices are within scope of PCI Requirement 10? The same answer as for the PCI DSS as a whole – anything involved with handling, or with access to, card data is within scope, and we therefore need to capture an audit trail from each of them. The most critical devices are the firewall, servers holding settlement or transaction files, and any Domain Controller for the PCI estate, although all 'in scope' devices must be covered without exception.

How do we get event logs from 'in scope' PCI devices?

We'll take them in turn –

How do I get PCI event logs from firewalls? – The exact command set varies between manufacturers and firewall versions, but you will need to enable logging via either the firewall web interface or the command line. Taking a typical example – a Cisco ASA – the CLI command sequence is as follows:

logging on
no logging console
no logging monitor
logging abcd
logging trap informational

(where abcd is the address of your syslog server). This will make sure all 'Informational' level and above messages are forwarded to the syslog server, and guarantees that all logon and log off events are captured.

How do I get PCI audit trails from Windows servers and EPoS / tills? – There are a few more steps required for Windows servers and PCs / EPoS devices. First of all, use the Local Security Policy to make sure that logon and logoff events, privilege use, policy change and – depending on your application and how card data is handled – object access are all audited. You may also wish to enable System Event logging if you want your SIEM system to help troubleshoot and pre-empt system problems; for example, a failing disk can be replaced before complete failure by spotting disk errors. Typically we will need Success and Failure to be logged for each event:

  • Account Logon Events – Success and Failure
  • Account Management Events – Success and Failure
  • Directory Service Access Events – Failure *
  • Logon Events – Success and Failure
  • Object Access Events – Success and Failure **
  • Policy Change Events – Success and Failure
  • Privilege Use Events – Failure
  • Process Tracking – No Auditing ***
  • System Events – Success and Failure ****

* Directory Service Access Events available on a Domain Controller only

** Object Access – used in conjunction with folder and file auditing. Auditing Failures reveals attempted access to forbidden secure objects, which may be an attempted security breach. Auditing Success is used to give an audit trail of all access to secured data, such as card data in a settlement / transaction file or folder.

*** Process Tracking – not recommended, as this will generate a large number of events. It is better to use a specialized whitelisting / blacklisting technology.

**** System Events – not required for PCI DSS compliance, but often used to provide extra 'added value' from a PCI DSS initiative, giving early warning signs of problems with hardware and so pre-empting system failures.

Once events are being audited, they then need to be relayed back to your central syslog server. A Windows syslog agent program will automatically bind into the Windows event logs and send all events via syslog. The added benefit of such an agent is that events can be formatted into standard syslog severity and facility codes, and also pre-filtered. It is vital that events are forwarded to the secure syslog server in real time, to ensure they are backed up before there is any opportunity to clear the local server event log.
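The relay principle can be sketched with Python's standard library. This is a toy stand-in for a dedicated syslog agent, with 127.0.0.1:514 as a placeholder for your real collector address:

```python
# Minimal sketch: forward application events to a central syslog server
# in real time using only the standard library. A production PCI
# deployment would use a dedicated agent, secure transport and a real
# collector address instead of the placeholder below.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),                        # placeholder collector
    facility=logging.handlers.SysLogHandler.LOG_AUTH,  # authentication events
)
logger = logging.getLogger("pci-audit")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Informational and above, mirroring the firewall 'logging trap' level.
logger.info("logon success user=alice source=10.0.0.5")
logger.warning("logon failure user=admin source=203.0.113.9")
```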

Unix / Linux servers – enable logging using the syslogd daemon, which is a standard part of all UNIX and Linux operating systems such as Red Hat Enterprise Linux, CentOS and Ubuntu. Edit the /etc/syslog.conf file and enter the details of the syslog server.

For example, append the following line to the /etc/syslog.conf file

*.* @abcd

Or, if using Solaris or another System 5-type UNIX:

*.debug @abcd
*.info @abcd
*.notice @abcd
*.warning @abcd
*.err @abcd
*.crit @abcd
*.alert @abcd
*.emerg @abcd

Where abcd is the IP address of the target syslog server.

If you need to collect logs from a third-party application, e.g. Oracle, then you may need to use a specialized Unix syslog agent which allows third-party log files to be relayed via syslog.

Other network devices – routers and switches within the scope of the PCI DSS will also need to be configured to send events via syslog. As detailed for firewalls earlier, syslog is an almost universally supported function across network devices and appliances. However, in the rare case that syslog is not supported, SNMP traps can be used, provided the syslog server can receive and interpret SNMP traps.

PCI DSS Requirement 10.6 – "Review logs for all system components at least daily"

We have now covered how to get the right logs from all devices within scope of the PCI DSS, but this is often the simpler part of handling Requirement 10. The aspect of Requirement 10 which often concerns PCI Merchants the most is the extra workload they expect from becoming responsible for analyzing and understanding a potentially huge volume of logs. There is often an 'out of sight, out of mind' philosophy, or an 'if we cannot see the logs, then we cannot be responsible for reviewing them' mindset: once logs are made visible and placed on the screen in front of the Merchant, there is no longer any excuse for ignoring them.

Tellingly, although the PCI DSS avoids being prescriptive about how to deliver against its 12 requirements, Requirement 10 specifically notes that "log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6". In practice it would be an extremely manpower-intensive task to review all event logs in even a small-scale environment, so an automated means of analyzing logs is essential.
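As a toy illustration of what such a parsing-and-alerting tool does – the log line format and the alert threshold here are invented for the example, not taken from any particular product:

```python
# Toy sketch of automated log review: scan syslog lines for failed
# logons and flag any source that exceeds a threshold. The message
# format and threshold are illustrative assumptions only.
import re
from collections import Counter

FAILED = re.compile(r"logon failure .*source=(\S+)")
THRESHOLD = 3  # alert after this many failures from one source

def failed_logon_alerts(lines, threshold=THRESHOLD):
    """Return source addresses whose failure count meets the threshold."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return sorted(src for src, n in counts.items() if n >= threshold)

sample = [
    "Jan 10 09:01:02 host1 logon failure user=admin source=203.0.113.9",
    "Jan 10 09:01:05 host1 logon failure user=admin source=203.0.113.9",
    "Jan 10 09:01:09 host1 logon failure user=root source=203.0.113.9",
    "Jan 10 09:02:00 host2 logon success user=alice source=10.0.0.5",
]
alerts = failed_logon_alerts(sample)  # ['203.0.113.9']
```

A real SIEM adds normalization across device types, correlation rules and review workflow on top of this basic counting idea.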

However, when implemented correctly, this will become so much more than simply a tool to help you cope with the inconvenient burden of the PCI DSS. An intelligent Security Information and Event Management system will be hugely beneficial to all troubleshooting and problem investigation tasks. Such a system will allow potential problems to be identified and fixed before they affect business operations. From a security standpoint, by enabling you to become 'intimate' with the normal workings of your systems, you are then well-placed to spot truly unusual and potentially significant security incidents.

For more information go to http://www.newnettechnologies.com

All material is copyright New Net Technologies Ltd.


Source by Mark Kedgley

The History of CRM – Moving Beyond the Customer Database


Customer Relationship Management (CRM) is one of those magnificent concepts
that swept the business world in the 1990's with the promise of forever changing
the way businesses small and large interacted with their customer bases. In the
short term, however, it proved to be an unwieldy process that was better in
theory than in practice for a variety of reasons. First among these was that
it was simply so difficult and expensive to accurately track the high volume
of records needed and to keep them constantly updated.
In the last several years, however, newer software systems and advanced
tracking features have vastly improved CRM capabilities and the real promise of
CRM is becoming a reality. As newer, more customizable Internet solutions
have hit the marketplace, competition has driven prices down, so that even
relatively small businesses are reaping the benefits of some custom CRM
programs.
In the beginning …
The 1980's saw the emergence of database marketing, which was simply a catch
phrase to define the practice of setting up customer service groups to speak
individually to all of a company's customers.
In the case of larger, key clients it was a valuable tool for keeping the
lines of communication open and tailoring service to the client's needs. In the
case of smaller clients, however, it tended to provide repetitive, survey-like
information that cluttered databases and did not provide much insight. As
companies began tracking database information, they realized that the bare bones
were all that was needed in most cases: what they buy regularly, what they
spend, what they do.
Advances in the 1990's
In the 1990's companies began to improve on Customer Relationship Management
by making it more of a two-way street. Instead of simply gathering data for
their own use, they began giving back to their customers not only in terms of
the obvious goal of improved customer service, but in incentives, gifts and
other perks for customer loyalty.
This was the beginning of the now familiar frequent flyer programs, bonus
points on credit cards and a host of other resources that are based on CRM
tracking of customer activity and spending patterns. CRM was now being used as a
way to increase sales passively as well as through active improvement of
customer service.
True CRM comes of age
Real Customer Relationship Management as it's thought of today really began
in earnest in the early years of this century. As software companies began
releasing newer, more advanced solutions that were customizable across
industries, it became feasible to really use the information in a dynamic way.

Instead of feeding information into a static database for future reference,
CRM became a way to continuously update understanding of customer needs and
behavior. Branching of information, sub-folders, and custom tailored features
enabled companies to break down information into smaller subsets so that they
could evaluate not only concrete statistics, but information on the motivation
and reactions of customers.
The Internet provided a huge boon to the development of these huge databases
by enabling offsite information storage. Where before companies had difficulty
supporting the enormous amounts of information, the Internet provided new
possibilities and CRM took off as providers began moving toward
Internet-based solutions.
With the increased fluidity of these programs came a less rigid relationship
between sales, customer service and marketing. CRM enabled the development of
new strategies for more cooperative work between these different divisions
through shared information and understanding, leading to increased customer
satisfaction from order to end product.
Today, CRM is still utilized most frequently by companies that rely heavily
on two distinct features: customer service or technology. The three sectors of
business that rely most heavily on CRM – and use it to great advantage – are
financial services, a variety of high tech corporations and the
telecommunications industry.
The financial services industry in particular tracks the level of client
satisfaction and what customers are looking for in terms of changes and
personalized features. They also track changes in investment habits and spending
patterns as the economy shifts. Software specific to the industry can give
financial service providers truly impressive feedback in these areas.
Who's in the CRM game?
About 50% of the CRM market is currently divided between five major players
in the industry: PeopleSoft, Oracle, SAP, Siebel and relative newcomer
Telemation, based on Linux and developed by an old standard, Database Solutions.
The other half of the market falls to a variety of other players, although
Microsoft's new emergence in the CRM market may cause a shift soon. Whether
Microsoft can capture a share of the market remains to be seen. However, their
brand-name familiarity may give them an edge with small businesses considering a
first-time CRM package.
PeopleSoft was founded in the mid-1980's by Ken Morris and Dave
Duffield as a client-server based human resources application. By 1998,
PeopleSoft had evolved into a purely Internet-based system, PeopleSoft 8.
There's no client software to maintain and it supports over 150 applications.
PeopleSoft 8 is the brainchild of over 2,000 dedicated developers and $500
million in research and development.
PeopleSoft branched out from their original human resources platform in the
1990's and now supports everything from customer service to supply chain
management. Its user-friendly system requires minimal training and is
relatively inexpensive to deploy.
One of PeopleSoft's major contributions to CRM was their detailed analytic
program that identifies and ranks the importance of customers based on numerous
criteria, including amount of purchase, cost of supplying them, and
frequency of purchase.
Oracle built a solid base of high-end customers in the late 1980's,
then burst into national attention around 1990 when, under Tom Siebel, the
company aggressively marketed a small-to-medium business CRM solution.
Unfortunately they could not follow up on the incredible sales they
garnered and ran into a few years of real problems.
Oracle landed on its feet after a restructuring and their own refocusing on
customer needs and by the mid-1990's the company was once again a leader in CRM
technologies. They continue to be one of the leaders in the enterprise
marketplace with the Oracle Customer Data Management System.
Telemation's CRM solution is flexible and user-friendly, with a
toolkit that makes changing features and settings relatively easy. The system
also provides a quick learning environment that newcomers will appreciate. Its
uniqueness lies in that, although compatible with Windows, it was developed as a
Linux program. Will Linux be the wave of the future? We do not know, but if it
is, Telemation's ahead of the game.
The last few years …
In 2002, Oracle released their Global CRM in 90 Days package that promised
quick implementation of CRM throughout company offices. Offered with the package
was a set-fee service for set-up and training for core business needs.
Also in 2002 (a stellar year for CRM), SAP America's mySAP began using a
"Middleware" hub that was capable of connecting SAP systems to externals and
front and back office systems for a unified operation that links partners,
employees, process and technologies in a closed-loop function. Siebel
consistently based its business primarily on enterprise-size businesses willing
to invest millions in CRM systems, which worked for them to the tune of $ 2.1
billion in 2001. However, in 2002 and 2003 revenues slipped as several smaller
CRM firms joined the fray as ASP's (Application Service Providers). These
companies, including UpShot, NetSuite and SalesNet, offered businesses CRM-style
tracking and data management without the high cost of traditional CRM start-up.
In October of 2003, Siebel launched CRM OnDemand in collaboration with IBM.
Their entry into the hosted, monthly CRM solution niche hit the marketplace with
gale force. To some of the monthly ASP's it was a call to arms, to others it was
a sign of Siebel's increasing confusion over brand identity and increasing loss
of market share. In a stroke of genius, Siebel acquired UpShot a few months
later to get them started and smooth their transition into the ASP market. It
was a successful move.
With Microsoft now in the game, it's too soon to tell
what the results will be, but it seems likely that they may get some share of
small businesses that tend to buy based on familiarity and usability. ASP's will
continue to grow in popularity as well, especially with mid-sized businesses, so
companies like NetSuite, SalesNet and Siebel's OnDemand will thrive. CRM on the
web has come of age!
This article on "The History of CRM" is reprinted with permission.

Copyright © 2004-2005 Evaluseek Publishing.


Source by Lucy P. Roberts

The Benefits of Vtiger CRM for Your Business


The Vtiger CRM is a type of enterprise-ready Open Source CRM software designed principally for small and medium sized companies. It combines the advantages of Open Source software with additional enterprise features that add more value for the end user.

It is a professional CRM application that is fully featured, 100% Open Source, and has no ongoing license fees. Furthermore, the setup cost is very low and there are no per-seat fees. If you want to customize it to suit your business processes and systems, there is the opportunity to do so. It is fully integrated with a range of third-party software systems, and both onsite and hosted cloud solutions are available.

Moreover, there is no upfront capital expenditure, and unlimited users and unlimited traffic are possible. No matter where you are in the world or what language you speak, the Vtiger CRM is international and multilingual. It runs over SSL-secured, 128-bit encrypted web access and has a short implementation time, giving you a quick ROI. It also integrates with ERP systems, provides a web portal for customers and partners, and integrates with Microsoft Outlook and Office, Mozilla Firefox and Thunderbird.

The installation of Vtiger CRM is very easy, as all the necessary software such as Apache, MySQL and PHP is integrated, and executables are made available for Windows and Linux (Red Hat, Debian, SuSE, Fedora and Mandrake) operating systems on SourceForge.net. As a result, you do not need to be concerned about setting up the database, web server and other software yourself.

Furthermore, the Vtiger CRM provides a customer relationship management solution for small and medium sized companies, with well-loaded features on a secure, customizable platform. It is a web-based, platform-independent CRM and Groupware system centred on Open Source technologies, which helps you formulate strategies for cross-departmental processes so that you can methodically develop your existing and new customer relationships.

It supports your business's internal processes and employees in sales, marketing, customer service and back-office roles, helping them better organize customer data such as accounts and contacts, sales leads, potentials and pipelines, quotes, sales orders, trouble tickets, a products knowledgebase, and much more. As a result, if you want improved customer service which will in turn lead to more sales and profitability for your business, Vtiger CRM is surely your best bet.


Source by Olushola George Otenaike

Programming Languages and Frameworks You Should Learn In 2016


The programming languages and frameworks trend for 2016 seems to be leaning more toward frontend development than backend development. Below is a simplified list of what you should take note of and consider improving your knowledge of.

Languages and Platforms

PHP 7 is the latest version of PHP. Big websites like Facebook, Google and Apple use PHP. PHP 7 is also twice as fast as the previous version, 5.6 – a huge improvement for CMS systems like WordPress and Drupal.

JavaScript also has a major update called ES2015 (previously known as ES6, the successor to ES5). Some incredible sites that use JavaScript are Lost Worlds Fairs and Cascade Brewery Co.

Python 3.5 was released in 2015 with some juicy features like the async/await syntax for asyncio. Nearly all major libraries are now available for Python 3, so it might be a good time to upgrade your legacy code base.
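A small taste of the coroutine syntax added in 3.5 (shown here with asyncio.run, which arrived later, in Python 3.7):

```python
# Sketch of Python 3.5+ coroutines: two simulated I/O waits running
# concurrently instead of back to back.
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)   # stands in for a slow network call
    return f"{name} done"

async def main():
    # gather schedules both coroutines concurrently.
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

results = asyncio.run(main())
```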

Node.js has the largest ecosystem of open source libraries in the world. It is always a good study choice, and its long-term support releases provide added stability going forward. LinkedIn and Walmart use aspects of Node.js on their websites.

Swift 2 was released earlier this year and is growing rapidly (it's claimed to be the fastest growing programming language in history!). It's open source and has already been ported to Linux, which means it is now possible to build backends and server-side software with it. It's built by Apple (not the Granny Smith apple), and they have big plans for it, so it's worth taking note of as its popularity grows.

HTML5 is last and certainly not least. It's the one to watch out for! YouTube switched from Flash to HTML5 this year, and Adobe Animate's exports now default to HTML5. It's also one of the fastest growing job trends on indeed.com, which shows its popularity. HTML5 is probably one of the best long-term languages to study over the next 3 years. Some sites that make use of HTML5 are Ford, Peugeot and Lacoste – they are really cool.

Frontend Frameworks (CSS Frameworks)

These complete frameworks offer features like icons and other reusable components for navigation, sets of forms, styled-typography, buttons, popovers, alerts and more.

Bootstrap became very popular in 2015, and this popularity is only going to increase in 2016 as it turns into a web development standard. Version 4 is coming out soon and will integrate with SASS. It's quite easy to learn, and it comes with some neat extensions and examples too.

Foundation is an alternative to Bootstrap. In 2015 they launched version 6, which focuses on modularity so that you can include only the pieces you need, for a faster loading time; it's also built with SASS.

Skeleton is a sexy (there's no other word for it) boilerplate for responsive, mobile-friendly development. It is a small collection of CSS files that help you quickly develop beautiful sites that look incredible on all screen sizes.

Backend Frameworks

The backend framework, or application layer, is the 'brain' of the website: it is how the website operates and the logic behind it. In backend work you are developing the 'brain', whereas in frontend work you are creating the 'face'.

Depending on which language you prefer, there are plenty of choices. Below is a list of a few languages with some of their frameworks:

PHP: Symfony, Zend, Laravel, Slim, Codeigniter and CakePHP
Node.js: Express, Hapi, Sails.js and Total.js
JavaScript (mostly frontend): Angular.js, Vue.js, Polymer, React and Ember.js
Ruby: Rails and Sinatra
Java: Play, Spring and Spark
Python: Django and Flask
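To make the 'brain' idea concrete, here is a minimal framework-free backend sketch using only Python's standard WSGI interface (the route and response messages are invented for illustration). Every framework in the list above ultimately wraps this same request-response cycle with routing, templating and data layers:

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Minimal WSGI 'backend': route the request path to a response."""
    path = environ.get("PATH_INFO", "/")
    if path == "/":
        status, body = "200 OK", b"Hello from the backend 'brain'!"
    else:
        status, body = "404 Not Found", b"Not Found"
    start_response(status, [("Content-Type", "text/plain")])
    return [body]

def call(path):
    """Invoke the app directly, the way a test client would."""
    environ = {}
    setup_testing_defaults(environ)
    environ["PATH_INFO"] = path
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app(environ, start_response))
    return captured["status"], body
```

Frameworks like Flask or Django hide this plumbing, which is exactly why the choice between them matters less than understanding the cycle itself.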

Frameworks can be very useful, but that does not mean every framework will be useful for you. Ultimately, it is the developer's decision whether or not to use a framework, and that decision depends on what you want to achieve. Go through each framework and see if it aligns with your goals before you start using it.

CMS (Content Management Systems)

This article would not be complete without mentioning two popular CMSs: WordPress and Drupal. Both are written in PHP, and with the new PHP 7 release, both run even faster.

WordPress has evolved from a dry blogging CMS to a fully-fledged CMS/framework with plugins that make almost anything possible. Thousands of developers make a living as WordPress developers by creating premium themes or plugins. You can also use WordPress as a REST API backend.

Drupal 8 was released in 2015. It makes use of Symfony 2, Composer packages and the Twig templating engine. A few websites that are run on Drupal are: Johnson & Johnson, BBC Store and World Economic Forum. Drupal is ideal for content heavy websites.

If you are in doubt about what to spend time studying in 2016, we've made a list of 5 frameworks we believe you should invest your time in:

  1. Bootstrap
  2. Angular.js
  3. Ruby on Rails
  4. HTML5
  5. Laravel

As a 6th recommendation, add Git to your list of what to learn in 2016. Its adoption is growing like crazy and will only continue to grow. Companies like Google, Facebook, Microsoft, Twitter and LinkedIn make use of Git.

This is just a short summary of the programming languages and frameworks we think you should learn in 2016. Of course there are hundreds of other languages and frameworks out there, but I hope this was of value to you.


Source by Kyle Prinsloo

Asterisk Vs Cisco Vs Avaya VoIP Telephone Systems


VoIP, or Voice over IP, the latest in telephone communication, works by taking a phone call, converting the analog signal to digital, transmitting the digital signals over an IP network or broadband connection, and finally terminating the call on the PSTN. Call charges are greatly reduced using this technology. A further advantage is that software emulating a phone can be loaded onto your laptop, enabling you to access its services even while you travel.

VoIP uses SIP (Session Initiation Protocol), a peer-to-peer technology that allows endpoints to communicate with each other without having calls routed through a central station. Therefore, calling from one SIP-enabled phone to another cuts call charges drastically.

The Asterisk system comes with an Asterisk server which manages things like teleconferencing, voice mail, queues and hold music. The hard phone is a digital phone with an Ethernet jack that communicates with the server using the SIP protocol. Hard phones, including the wireless versions, are not very expensive. Soft phones are implemented in software and run on a PC. Asterisk runs predominantly on Linux, an open source operating system.

Cisco has telephony solutions that are network based and run on a router. They are scalable and work well in multi-user environments across multiple locations. The UC500 suite bundles services like routing, switching, security, telephony and wireless functionality in a single device, which greatly reduces costs for a company planning to deploy these services. The Cisco CallManager Express uses SIP to connect phones through the Internet and also has the features of the UC500, making it more viable for medium-scale businesses. Additional features are paging, intercom, ICMP and class of restriction on a user's calls.

Avaya IP Office uses IP technology to deliver voice and data communication, messaging and customer management across multiple locations with 2 to 300 people. It allows you to work from anywhere, host conferences, integrate applications, and measure and improve customer satisfaction at the touch of a button. It is cost effective as it lowers long-distance call and conferencing fees, supports remote workers and helps keep your business connected and up to date.

The three products can be compared based on the following few criteria:

• Number of extensions: Asterisk can support up to 100 extensions, while Cisco and Avaya can go up to 360 extensions, thereby supporting large organizations as well. This improves scalability and helps reduce costs in the long run.
• Freeware: Asterisk is freeware and runs on a Linux server. This makes the telephony solution cheaper than either Cisco or Avaya which make extensive use of routers and switches for communication.
• Installation and maintenance: Asterisk is a programmer's dream, as it is open source and can be changed at will. However, for an end user, it may be a nightmare. Support and services are better with Cisco and Avaya, which are established names in the industry.

The main thing going for Asterisk is its cost. However, it is not always advisable to look only at the initial cost. Other criteria, such as scalability, integration with devices you already have, interoperability and long-run cost, should be considered when choosing one product over another.


Source by Scott Camball

File Integrity Monitoring – PCI DSS Requirements 10, 10.5.5 and 11.5


Although FIM or File-Integrity Monitoring is only mentioned specifically in two sub-requirements of the PCI DSS (10.5.5 and 11.5), it is actually one of the more important measures in securing business systems from card data theft.

What is it, and why is it important?

File integrity monitoring systems are designed to protect card data from theft. The primary purpose of FIM is to detect changes to files and their associated attributes. This article provides the background to three different dimensions of file integrity monitoring, namely:

– Secure hash-based FIM, used predominantly for system file integrity monitoring
– File contents integrity monitoring, useful for configuration files from firewalls, routers and web servers
– File and/or folder access monitoring, vital for protecting sensitive data

Secure Hash Based FIM

Within a PCI DSS context, the main files of concern include:

– System files, e.g. anything that resides in the Windows System32 or SysWOW64 folders, program files, or, for Linux/Unix, key kernel files

The objective for any hash-based file integrity monitoring system as a security measure is to ensure that only expected, desirable and planned changes are made to in-scope devices. The reason for doing this is to prevent card data theft via malware or program modifications.

Imagine that a Trojan is installed on a card transaction server – the Trojan could be used to transfer card details off the server. Similarly, a packet sniffer program could be placed on an EPoS device to capture card data – if it was disguised as a common Windows or Unix process, with the same program and process names, it would be hard to detect. For a more sophisticated hack, what about implanting a 'backdoor' into a key program file to allow access to card data?

These are all examples of security incidents where File-Integrity monitoring is essential in identifying the threat.

Remember that anti-virus defenses are typically only aware of around 70% of the world's malware. An organization hit by a zero-day attack (zero-day marks the point in time when a new form of malware is first identified – only then can a remediation or mitigation strategy be formulated) can wait days or weeks before all devices are updated to protect them.

How far should FIM measures be taken?

As a starting point, it is essential to monitor the Windows System32 or SysWOW64 folders, plus the main card data processing application program folders. For these locations, run a daily inventory of all system files and identify all additions, deletions and changes. Additions and deletions are relatively straightforward to identify and evaluate, but how should changes be treated, and how do you assess the significance of a subtle change, such as a file attribute? The answer is that ANY file change in these critical locations must be treated with equal importance. Most high-profile PCI DSS security breaches have been instigated via an 'inside man' – typically a trusted employee with privileged admin rights. For today's cybercrime there are no rules.

The industry-acknowledged approach to FIM is to track all file attributes and to record a secure hash. Any change to the hash when the file-integrity check is re-run is a red alert situation – using SHA1 or MD5, even a microscopic change to a system file will denote a clear change to the hash value. When using FIM to govern the security of key system files there should never be any unplanned or unexpected changes – if there are, it could be a Trojan or backdoor-enabled version of a system file.
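As a sketch of the hash-based approach in Python (using SHA-256 rather than the older MD5/SHA1; the function names and baseline handling are illustrative, not taken from any particular FIM product):

```python
import hashlib

def file_hash(path, algorithm="sha256"):
    """Return the secure hash of a file's contents, read in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_integrity(path, baseline_digest):
    """Any deviation from the baseline hash is a red-alert change."""
    return file_hash(path) == baseline_digest
```

Even a one-byte change to the file produces a completely different digest, which is what makes the hash comparison such a decisive signal.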

This is why it is also crucial to use FIM in conjunction with a 'closed loop' change management system – planned changes should be scheduled, and the associated file integrity changes logged and appended to the planned change record.

File Content / Config File Integrity Monitoring

Whilst a secure hash checksum is an infallible means of identifying any system file change, it only tells us that a change has been made to the file, not what that change is. For a binary-format executable this is the only meaningful way of conveying that a change has been made, but a more valuable means of file integrity monitoring for 'readable' files is to keep a record of the file contents. This way, if a change is made to the file, the exact change to the readable content can be reported.

For instance, a web configuration file (PHP, ASP.NET, JavaScript or XML config) can be captured by the FIM system and recorded as readable text; thereafter changes will be detected and reported directly.
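Content-level monitoring amounts to keeping the last known-good copy of a file and diffing the current version against it. A minimal sketch using Python's difflib (the config lines are invented for illustration):

```python
import difflib

def content_changes(baseline_text, current_text):
    """Return the human-readable lines that were added or removed."""
    diff = difflib.unified_diff(
        baseline_text.splitlines(),
        current_text.splitlines(),
        lineterm="",
    )
    # Keep only the actual +/- change lines, not the diff headers.
    return [line for line in diff
            if line[:1] in "+-" and line[:3] not in ("+++", "---")]
```

A report like "-allow = 10.0.0.0/8 / +allow = 0.0.0.0/0" tells the analyst immediately that an access rule was loosened, which a bare hash mismatch never could.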

Similarly, if a firewall access control list was edited to allow access to key servers, or a Cisco router startup config altered, then this could allow a hacker all the time needed to break into a card data server.

One final point on file contents integrity monitoring – within the security policy / compliance arena, Windows registry keys and values are often included under the heading of FIM. These need to be monitored for changes, as many hacks involve modifying registry settings. Similarly, a number of common vulnerabilities can be identified by analyzing registry settings.

File and/or Folder Access Monitoring

The final consideration for file integrity monitoring is how to handle file types not suited to secure hash or contents tracking. For example, because a log file or database file is always changing, both its contents and its hash will also be constantly changing. Good file integrity monitoring technology will allow these files to be excluded from any FIM template.

However, card data can still be stolen without detection unless other measures are put in place. As an example scenario, in an EPoS retail system, a card transaction or reconciliation file is created and forwarded to a central payments server on a scheduled basis throughout the trading day. The file will always be changing – maybe a new file is created every time with a time stamped name so everything about the file is always changing.

The file would be stored on an EPoS device in a secure folder to prevent user access to the contents. However, an 'inside man' with Admin Rights to the folder could view the transaction file and copy the data without necessarily changing the file or its attributes. Therefore the final dimension for File Integrity Monitoring is to generate an alert when any access to these files or folders is detected, and to provide a full audit trail by account name of who has had access to the data.
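Production access auditing relies on OS facilities such as Linux auditd or Windows object-access auditing, which record the account name behind each access. The underlying idea of watching access times, though, can be sketched in Python (illustrative only: many filesystems mount with noatime or relatime, so access times cannot be relied on alone):

```python
import os

def access_snapshot(paths):
    """Record last-access times for a set of sensitive files."""
    return {p: os.stat(p).st_atime_ns for p in paths}

def accessed_since(paths, snapshot):
    """Return files whose access time moved past the snapshot."""
    return [p for p in paths
            if os.stat(p).st_atime_ns > snapshot.get(p, 0)]
```

The snapshot-and-compare pattern is the same one the hash check uses; only the attribute being watched differs.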

Much of PCI DSS Requirement 10 is concerned with recording audit trails to allow a forensic analysis of any breach after the event and establish the vector and perpetrator of any attack.


Source by Mark Kedgley

Advantages of Lamp Server for Web Hosting


Dedicated website hosting means using one web server, with all of its resources, to handle the sheer amount of data of a single website. High-traffic websites that receive millions of hits per day, such as Freelancer, oDesk, eBay, Amazon and Microsoft, use dedicated servers for hosting their online businesses. As an online businessman, there is a definite need for a dedicated LAMP server for website hosting. Sadly, many entrepreneurs still try to cut costs and skip dedicated LAMP server hosting for their e-commerce business.

When you are getting thousands of hits per day, it indicates that your website is doing pretty well at attracting visitors. However, this very same traffic can become a big pain in the neck if not handled properly. Handling high volumes of traffic with sheer amounts of data is virtually impossible with shared web hosting or cheap dedicated web hosting packages. This results in crashed servers, missing files and lost data. To handle high traffic in the long run, you need the services of a professional dedicated web host.

The LAMP server is the backbone of any e-commerce related business. LAMP is a combination of the Linux OS, the Apache web server, the MySQL database management system, and the PHP/Perl/Python web programming languages. For any business, hiring dedicated web hosting services is a hefty cost, but looking at the long-term benefits, it is definitely going to pay off. Linux is the best operating system for hosting websites: it is fast, secure and improves page loading times even when there is high traffic on your website. Apache is a web server that has crossed the 100 million mark of websites hosted. MySQL is the most popular free and open source database management system, used to handle customer records and various other data for every website. PHP, Perl and Python are three programming languages, any one of which can be used to complete the LAMP server configuration.

For your website to function properly, you need to be sure that the dedicated LAMP web hosting service is reliable enough. Check the company's previous performance based on customer reviews. Some web hosts require system reboots after software updates or minor installations. While this is normal for a server, the e-commerce business hosted on it would take a big hit in revenue and profit: no visitor can browse the website while the server is down, restarting or crashed.

Hence it is very important that the server reboots as little as possible, so that the website stays up and live on the internet. A dedicated LAMP server assures maximum performance, security and quick page loading times, minimizing the chances of server crashes. It also protects against malware and virus attacks.

When you are hosting your website, especially on a European server, go for LAMP server hosting, as it offers the best hosting combination.


Source by Yasir Saeed

Nagios Log Monitoring – Monitor Log Files in Unix Effectively


Nagios Log File Monitoring: Monitoring log files with Nagios can be just as difficult as it is with any other monitoring application. However, with Nagios, once you have a log monitoring script or tool that can monitor a specific log file the way you want it monitored, Nagios can be relied upon to handle the rest. This versatility is what makes Nagios one of the most popular and user-friendly monitoring applications out there. It can be used to effectively monitor anything. Personally, I love it. It has no equal!

My name is Jacob Bowman and I work as a Nagios Monitoring specialist. I've come to realize, given the number of requests I receive at my job to monitor log files, that log file monitoring is a big deal. IT departments have the ongoing need to monitor their UNIX log files in order to ensure that application or system issues can be caught in time. When issues are known about, unplanned outages can be avoided altogether.

But the common question often asked by many is: what monitoring application is available that can effectively monitor a log file? The plain answer to this question is NONE! The log monitoring applications that do exist require far too much configuration, which in effect renders them unworthy of consideration.

Log monitoring should allow for pluggable arguments on the command line (instead of in separate config files) and should be very easy for the average UNIX user to understand and use. Most log monitoring tools are not like this. They are often complex and require time to get familiar with (through reading endless pages of installation setups). In my opinion, this is unnecessary trouble that can and should be avoided.

Again, I strongly believe, in order to be efficient, one must be able to run a program directly from the command line without needing to go elsewhere to edit config files.

So the best solution, in most cases, is to either write a log monitoring tool for your particular needs or download a log monitoring program that has already been written for your type of UNIX environment.

Once you have that log monitoring tool, you can give it to Nagios to run at any time, and Nagios will schedule it to be kicked off at regular intervals. If, after running it at the set intervals, Nagios finds the issues/patterns/strings that you tell it to watch for, it will alert and send out notifications to whoever you want them sent to.

But then you wonder, what type of log monitoring tool should you write or download for your environment?

The log monitoring program that you should obtain to monitor your production log files must be as simple as the below but must still remain powerfully versatile:

Example: logrobot /var/log/messages 60 'error' 'panic' 5 10 -foundn

Output: 2 — 1380 — 352 — ATWF — (Mar/1)-(16:15) — (Mar/1)-(17:15:00)


The "-foundn" option searches /var/log/messages for the strings "error" and "panic". Once the scan completes, the tool exits with 0 (OK), 1 (WARNING) or 2 (CRITICAL). Each time you run that command, it provides a one-line statistics report similar to the output above. The fields are delimited by "—".

1st field is 2 = the check result is critical.

2nd field is 1380 = number of seconds since the strings you specified last occurred in the log.

3rd field is 352 = there were 352 occurrences of the string "error" and "panic" found in the log within the last 60 minutes.

4th field is ATWF = Do not worry about this for now. Irrelevant.

5th and 6th fields = The log file was searched from (Mar/1)-(16:15) to (Mar/1)-(17:15:00). From the data gathered in that timeframe, 352 occurrences of "error" and "panic" were found.

If you would actually like to see all 352 occurrences, you can run the below command and pass the "-show" option to the logrobot tool. This will output to the screen all matching lines in the log that contain the strings you specified and that were written to the log within the last 60 minutes.

Example: logrobot /var/log/messages 60 'error' 'panic' 5 10 -show

The "-show" command will output to the screen all the lines it finds in the log file that contain the "error" and "panic" strings within the 60-minute time frame you specified. Of course, you can always change the parameters to fit your particular needs.
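logrobot itself is a ready-made tool, but the core threshold logic behind this kind of check can be sketched in Python (the function and thresholds below are a simplified, hypothetical stand-in, not logrobot's actual code; exit codes 0/1/2 follow the standard Nagios plugin convention):

```python
import re

def check_log(path, patterns, warn_at, crit_at):
    """Count lines matching any pattern; return a Nagios-style code.

    0 = OK, 1 = WARNING, 2 = CRITICAL (Nagios plugin convention).
    """
    regex = re.compile("|".join(re.escape(p) for p in patterns))
    with open(path) as f:
        hits = sum(1 for line in f if regex.search(line))
    if hits >= crit_at:
        return 2, hits
    if hits >= warn_at:
        return 1, hits
    return 0, hits
```

Wrapped in a small script that prints a one-line report and exits with the returned code, this is all Nagios needs to alert on a log file.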

With this Nagios log monitoring tool (logrobot), you can perform magic that the big-name monitoring applications cannot come close to performing.

Once you write or download a log monitoring script or tool like the one above, you can have Nagios or CRON run it on a regular basis which will in turn enable you to keep a bird's eye view on all the logged activities of your important servers.

Do you have to use Nagios to run it on a regular basis? Absolutely not. You can use whatever you want.


Source by Jonathan Rayson