What Is Wardriving and How Can You Prevent It?

Imagine a car equipped with nothing more than a laptop computer, a portable GPS receiver, and a wireless network card slowly cruising through your neighborhood. Unknown to any onlookers, this is no ordinary vehicle; rather, it is a wardriving machine. As the car passes homes and businesses, a wireless network card (available at any electronics store for as little as $25) scans for wireless access points. Anyone with a wireless network (and there are many out there) is vulnerable. The computer is looking for what is called an SSID. An SSID is your wireless network name, and it is constantly transmitted by your access point, letting computers know of its presence. The wardriver uses software such as NetStumbler (for Windows) or Kismet (for Linux) to scan the airwaves for SSIDs. The program can track multiple access points at once and monitor their signal strength. These programs can also check whether the network is encrypted. The wardriver will generally configure his or her software to log any strong unencrypted signals. Using the GPS receiver, the coordinates of the strong signal are recorded.

After this preliminary drive, the wardriver can return to the locations that were recorded and connect to the access point. Once connected to an unencrypted network, the wardriver can use the victim's internet access and can also explore computers on the network. If files are being shared within someone's private network, all of that information is susceptible to a wardriver. Furthermore, once in the network, a wardriver can sniff network traffic and view information such as passwords and credit card numbers sent to the internet unencrypted (traffic protected by SSL/TLS is far harder to read). Wireless network vulnerability is a major problem, and as more and more households purchase wireless technology, the problem of insecure networks grows. Sound scary? Well, this happens every day, and it does not take an expert to pull off. It does not take an expert to protect against it either, however.

Steps you can take to protect against wardrivers:

There are a number of very simple steps you can take to protect your wireless network. For many of these, you will have to access your router configuration utility (check your manual for how to do this; you will generally need to type an IP address into your browser, such as 192.168.0.1 or 192.168.1.1).

Do not broadcast your SSID. If you are broadcasting your SSID, this is the first thing a program will pick up and recognize. If you configure your router not to broadcast your SSID, it will be difficult to detect (but not impossible: some software can sniff wireless communication, so while you are actively using your wireless network the SSID can still be revealed). If you are not broadcasting your SSID but it can be guessed (such as when you are using a default SSID), cloaking is pointless. For this reason, remember to change your SSID from the factory default. This is not a 100 percent effective way to secure your network, but it is a good first line of defense.

Change the default password. When you buy a router, it comes with a factory-set password. People experienced in working with routers know the default passwords for different routers (and the make of the router can be seen by wardriving software such as NetStumbler). It is important that you secure your router with a good password.

Encrypt your wireless communication. I cannot stress the importance of encrypting your wireless communication enough. Enable encryption and enter a key. Older routers may only be capable of WEP encryption, but if yours supports a stronger scheme such as WPA or WPA2, use it; it is far more secure than WEP. Like cloaking your SSID, encryption is not 100 percent secure. Given enough time and determination, if someone wants to target you and access your network, WEP encryption can be cracked using software such as AirSnort.

Filter the MAC addresses that are allowed to connect to your router. This requires that you enter your router configuration and input the MAC address of each wireless card you have. This restricts access so that only your computers can connect to the router. You will need to obtain the MAC address (the individual identification address of a network card, in the form of a 12-digit hexadecimal number). If someone sniffs traffic and detects the MAC address of a computer using your network wirelessly, the wardriver could spoof that address and connect to the router, but this takes time.

If you configure file sharing on your computers, make sure it is password protected. You should not share files on your networked computers unless access requires an authenticated user. Set up the same user accounts on each machine so that your computers can still share files.

With these relatively simple steps, wireless network users can secure their networks from wardrivers. Wireless networks are inherently insecure, and these tips will merely help you better secure your network. If someone is really determined to gain access to your network, then, given enough time, a good hacker can get in. These tips will, however, deter the average wardriver from gaining access to your network. Although these methods are not definitive security measures, they will change your network from something that can be hacked in a matter of seconds to something that will take a determined hacker days, if not weeks, of work – all of which will have to be done while in close proximity to your network.

Source by

The Elements of an Operating System

This article is aimed at giving you an overview of the various elements which make up an operating system. As you are probably aware, an operating system, whether it be Windows, Linux or Mac, serves the purpose of giving us, the human user, a means to interact with the computer in a meaningful way.

Imagine, if you can, that an operating system is broken down into five layers. In the following list I'll start at the bottom-most layer and work my way up to the very top.

Layer 1: The Kernel.

The kernel is the heart of the operating system. Amongst its responsibilities is ensuring that each running process is given a fair amount of time to execute, while controlling the amount of resources each process can use.

Layer 2: Memory Management.

The name of this layer gives you a good idea of what it is all about. It is the responsibility of this layer to share your computer's physical memory among the processes which want to use it. It also has to manage situations where there may not be enough physical memory to share out.

Layer 3: Input / Output.

On this layer, all the physical communication between your computer's hardware – such as disk drives, keyboards, mice, screens and so on – and the rest of the system takes place.

Layer 4: File Management.

Again, the name of this layer may give you a clue as to what it does. It is the job of this layer to control how the files on your computer's hard drive are stored and accessed by any application seeking to use them.

Layer 5: The User Interface.

The last element, or layer as we have been calling them, of an operating system is the User Interface. This layer is probably the easiest of all to understand, since it is the first thing you see when your operating system has logged you in. It is the job of this layer to provide a means for the user to actually interact with the rest of the layers and, as such, with the system as a whole.

Keep in mind there are two different types of User interfaces. The first one is probably the one you are most familiar with, the graphical user interface, which is where you see windows and icons for each of your files and so on.

The second is a command line interface, or text based interface where a user would interact with the system using text based commands.

Well, that is it for this article. If you're an experienced IT pro or tech guru, before you go placing comments that I've skimmed over certain details, please keep in mind that I have deliberately kept this article simple so that people new to computing in general find it easier to understand. With that said, I hope you enjoyed this article.

– David

Source by David Gallie

3 Things You Should Know About DJVU Files

The average computer user may come across files on their computer with various file extensions. These are the three or four letters that come at the end of a file name. For example, something named myfile.pdf is a file in PDF format. Users and a computer's operating system can identify the type of file by its extension. You may come across some that have the file extension DJVU. Here are three things that you should know about these types of files.

What Does DJVU Mean?

The file extension DJVU indicates that you have a DJVU file. It is an image which may be a photograph or some sort of document. The document could be digital or it could have been scanned. DJVU is a form of image compression technology that was developed by AT&T. It allows for the distribution of very clear, high-resolution images on the Internet. Images of all sorts can be placed online and will be of the highest quality.

Programs That Will Open These Files

There are a variety of programs that are able to open these types of images. For Windows, there is WinDjView, DjVuLibre DjView, ACD Systems Canvas 14, and ACD Systems ACDSee 15. For those using the Mac operating system, MacDjView will open these as will DjVuLibre DjView, and SST DjVuReader. Linux users can use DjVuLibre DjView and KDE Okular to open DJVU images.

What If They Will Not Open?

From time to time, you may come across a DJVU image that will not open. No matter how many adjustments you make or what tricks you try, it refuses to open. In that case, the most likely problem is that the file is corrupted. When that happens, you will need to find a different version of the file and download it.

Another common problem is not having the right version of the application needed to open the file. It may appear that you should be able to open it, yet it will not open. In that case, you need to download any updates to make sure the version of your application is current. If you have downloaded the latest version and it still does not work, you may have a different problem: the operating system may not know which program to use to open the DJVU file. This can be rectified rather easily – you simply tell the computer manually which program to use.

Source by Viktoria Carella

The Evolution of Python Language Over the Years

According to several websites, Python is one of the most popular coding languages of 2015. Along with being a high-level and general-purpose programming language, Python is also object-oriented and open source. At the same time, a good number of developers across the world have been making use of Python to create GUI applications, websites and mobile apps. The differentiating factor that Python brings to the table is that it enables programmers to flesh out concepts by writing less, and more readable, code. Developers can further take advantage of several Python frameworks to reduce the time and effort required for building large and complex software applications.

The programming language is currently being used by a number of high-traffic websites including Google, Yahoo Groups, Yahoo Maps, Linux Weekly News, Shopzilla and Web Therapy. Likewise, Python also finds great use in creating gaming, financial, scientific and educational applications. However, developers still use different versions of the programming language. According to the usage statistics and market share data posted on W3Techs, Python 2 is currently being used by 99.4% of the websites that use Python, whereas Python 3 is being used by only 0.6%. That is why it is essential for each programmer to understand the different versions of Python, and how it has evolved over the years.

How Has Python Evolved over the Years?

Conceived as a Hobby Programming Project

Despite being one of the most popular coding languages of 2015, Python was originally conceived by Guido van Rossum as a hobby project in December 1989. As Van Rossum's office remained closed during Christmas, he was looking for a hobby project that would keep him occupied during the holidays. He planned to create an interpreter for a new scripting language, and named the project Python. Python was originally designed as a successor to the ABC programming language. After writing the interpreter, Van Rossum made the code public in February 1991. At present, the open source programming language is managed by the Python Software Foundation.

Version 1 of Python

Python 1.0 was released in January 1994. The major release included a number of new features and functional programming tools, including lambda, filter, map and reduce. Version 1.4 was released with several new features like keyword arguments, built-in support for complex numbers, and a basic form of data hiding. The major release was followed by two further releases, version 1.5 in December 1997 and version 1.6 in September 2000. Version 1 of Python lacked many features offered by popular programming languages of the time, but these initial versions created a solid foundation for the development of a powerful, forward-looking programming language.
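
For readers who have not met those functional tools, here is a brief illustrative sketch of what they look like in present-day Python (note that reduce now lives in the functools module rather than being a built-in):

  from functools import reduce  # reduce moved out of the built-ins in Python 3

  numbers = [1, 2, 3, 4, 5]
  squares = list(map(lambda n: n * n, numbers))        # [1, 4, 9, 16, 25]
  evens = list(filter(lambda n: n % 2 == 0, numbers))  # [2, 4]
  total = reduce(lambda a, b: a + b, numbers)          # 15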

Version 2 of Python

In October 2000, Python 2.0 was released with the new list comprehension feature and a garbage collection system. The syntax for list comprehensions was inspired by other functional programming languages like Haskell, but Python 2.0, unlike Haskell, gave preference to alphabetic keywords over punctuation characters. The garbage collection system was also able to collect reference cycles. The major release was followed by several minor releases, which added a number of features to the programming language, such as support for nested scopes and the unification of Python's classes and types into a single hierarchy. The Python Software Foundation has already announced that there will be no Python 2.8, but it will provide support for version 2.7 of the programming language until 2020.
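
As a brief, illustrative aside, the list comprehension introduced in Python 2.0 combines a filter and a transform in a single readable expression:

  numbers = [1, 2, 3, 4, 5]
  even_squares = [n * n for n in numbers if n % 2 == 0]  # [4, 16]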

Version 3 of Python

Python 3.0 was released in December 2008. It came with several new features and enhancements, along with a number of deprecated features. The deprecated features and backward incompatibility make version 3 of Python quite different from earlier versions. Many developers therefore still use Python 2.6 or 2.7 to retain features dropped in the last major release. However, the new features of Python 3 made it more modern and popular, and many developers switched to version 3.0 of the programming language to take advantage of them.

Python 3.0 replaced the print statement with the built-in print() function, while allowing programmers to use a custom separator between items. Likewise, it simplified the rules of ordering comparisons: if the operands cannot be compared in a natural and meaningful order, the comparison operators now raise a TypeError exception. Version 3 of the programming language also draws a clear line between text and binary data instead of Unicode and 8-bit strings: all text is Unicode by default, while binary data is held in a separate bytes type that must be explicitly encoded and decoded.
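
As a quick, hedged illustration of those changes (run under Python 3; the values are arbitrary examples):

  # Python 3: print is a built-in function, with an optional custom separator
  print("card", "data", "log", sep=" | ")  # -> card | data | log

  # Python 3: comparing unorderable types raises TypeError instead of
  # silently returning an arbitrary result as Python 2 did
  try:
      result = 1 < "one"
  except TypeError as exc:
      print("unorderable comparison:", exc)

  # Text vs binary data: str is Unicode, bytes must be encoded explicitly
  text = "Pyth\u00f6n"
  data = text.encode("utf-8")   # bytes
  assert data.decode("utf-8") == text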

As Python 3 is backward incompatible, programmers cannot access features like string exceptions, old-style classes, and implicit relative imports. Also, developers must be familiar with the changes made to syntax and APIs. They can use a tool called "2to3" to migrate their applications from Python 2 to 3 smoothly. The tool highlights incompatibilities and areas of concern through comments and warnings, which help programmers make changes to the code and upgrade their existing applications to the latest version of the programming language.
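
As a rough sketch of the kind of rewrite 2to3 performs (typically invoked as something like "2to3 -w legacy_script.py"; the variable values here are hypothetical placeholders), a Python 2 fragment and its converted Python 3 form might look like this:

  # Hypothetical values standing in for real program state
  filename = "settlement.txt"
  name = "cardholder"

  # Before (Python 2 style, kept as comments so this file remains valid Python 3):
  #   print "Processing", filename
  #   if isinstance(name, unicode):
  #       data = name.encode("utf-8")

  # After a 2to3-style conversion:
  print("Processing", filename)    # print statement becomes the print() function
  if isinstance(name, str):        # Python 2's unicode type maps to Python 3's str
      data = name.encode("utf-8")  # binary data is an explicit bytes object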

Latest Versions of Python

At present, programmers can choose either version 3.4.3 or 2.7.10 of Python. Python 2.7 gives developers improved numeric handling and enhancements to the standard library, and it also makes it easier to migrate to Python 3. On the other hand, Python 3.4 comes with several new features and library modules, security improvements and CPython implementation improvements. However, a number of features are deprecated in both the Python API and the language itself. Developers who choose Python 3.4 can expect support over the longer term.

Version 4 of Python

Python 4.0 is expected to be available in 2023, after the release of Python 3.9. It is expected to come with features that will help programmers switch from version 3 to 4 seamlessly, and experienced Python developers should be able to take advantage of a number of backward-compatible features to modernize their existing applications without putting in extra time and effort. Developers will still have to wait many years to get a clear picture of Python 4.0, but they should monitor the latest releases so they can migrate to version 4.0 of the popular coding language easily.

Versions 2 and 3 of Python are quite different from each other. Each programmer must therefore understand the features of these distinct versions, compare their functionality against the specific needs of the project, and check which version of Python each framework supports. That said, developers should take advantage of the latest version of Python wherever possible to get new features and long-term support.

Harri has an avid interest in Python and loves to blog interesting stuff about the technology. He recently wrote an interesting blog on Python at http://www.allaboutweb.biz/category/python/.

Source by Harri Srivastav

Logging for the PCI DSS – How to Gather Server and Firewall Audit Trails for PCI DSS Requirement 10

PCI DSS Requirement 10 calls for a full audit trail of all activity for all devices and users, and specifically requires all event and audit logs to be gathered centrally and securely backed up. The thinking here is twofold.

Firstly, as a pro-active security measure, the PCI DSS requires all logs to be reviewed on a daily basis (yes – you did read that correctly – review ALL logs DAILY – we shall return to this potentially overwhelming burden later…). This requirement forces the Security Team to become more intimate with the daily 'business as usual' workings of the network. This way, when a genuine security threat arises, it will be more easily detected through unusual events and activity patterns.

The second driver for logging all activity is to give a 'black box' recorded audit trail so that if a cyber crime is committed, a forensic analysis of the activity surrounding the security incident can be conducted. At best, the perpetrator and the extent of their wrongdoing can be identified and remediated. At worst – lessons can be learned from the attack so that processes and / or technological security defenses can be improved. Of course, if you are a PCI Merchant reading this, then your main driver is that this is a mandatory PCI DSS requirement – so we should get moving!

Which Devices are within scope of PCI Requirement 10? The answer is the same as for the PCI DSS as a whole – anything involved with handling, or with access to, card data is within scope, and we therefore need to capture an audit trail from each of these devices. The most critical devices are the firewall, servers holding settlement or transaction files, and any Domain Controller for the PCI estate, although all 'in scope' devices must be covered without exception.

How do we get Event Logs from 'in scope' PCI devices?

We'll take them in turn –

How do I get PCI Event Logs from Firewalls? – The exact command set varies between manufacturers and firewall versions, but you will need to enable 'logging' via either the firewall web interface or the command line. Taking a typical example – a Cisco ASA – the CLI command sequence is as follows (where abcd is the address of your syslog server):

  logging on
  no logging console
  no logging monitor
  logging abcd
  logging trap informational

This will make sure all 'Informational' level and above messages are forwarded to the syslog server and guarantee all logon and logoff events are captured.

How do I get PCI Audit Trails from Windows Servers and EPoS/Tills? – There are a few more steps required for Windows Servers and PCs/EPoS devices. First of all, it is necessary to make sure that logon and logoff events, privilege use, policy change and (depending on your application and how card data is handled) object access are all audited; use the Local Security Policy to enable these settings. You may also wish to enable System Event logging if you want to use your SIEM system to help troubleshoot and pre-empt system problems, e.g. a failing disk can be caught before complete failure by spotting disk errors. Typically we will need Success and Failure to be logged for each event –

  • Account Logon Events – Success and Failure
  • Account Management Events – Success and Failure
  • Directory Service Access Events – Failure *
  • Logon Events – Success and Failure
  • Object Access Events – Success and Failure **
  • Policy Change Events – Success and Failure
  • Privilege Use Events – Failure
  • Process Tracking – No Auditing ***
  • System Events – Success and Failure ****

* Directory Service Access Events available on a Domain Controller only

** Object Access – Used in conjunction with Folder and File Auditing. Auditing Failures reveals attempted access to forbidden secure objects, which may be an attempted security breach. Auditing Success is used to give an audit trail of all access to secured data, such as card data in a settlement/transaction file or folder.

*** Process Tracking – not recommended, as this will generate a large number of events. It is better to use a specialized whitelisting/blacklisting technology.

**** System Events – Not required for PCI DSS compliance, but often used to provide extra 'added value' from a PCI DSS initiative by giving early warning signs of problems with hardware and so pre-empting system failures.

Once events are being audited, they then need to be relayed back to your central syslog server. A Windows syslog agent program will automatically bind into the Windows Event Logs and send all events via syslog. The added benefit of an agent like this is that events can be formatted into standard syslog severity and facility codes and also pre-filtered. It is vital that events are forwarded to the secure syslog server in real time to ensure they are backed up before there is any opportunity to clear the local server event log.
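
A minimal sketch of what such an agent does under the hood, written with Python's standard library rather than any particular commercial product (the server address, facility choice and event text below are assumptions for illustration only):

  import logging
  import logging.handlers

  # Hypothetical central syslog server for the PCI estate
  SYSLOG_SERVER = ("192.168.1.50", 514)

  # Map Windows-style event levels to syslog severities before forwarding
  SEVERITY_MAP = {
      "Error": logging.ERROR,
      "Warning": logging.WARNING,
      "Information": logging.INFO,
      "Audit Failure": logging.WARNING,
      "Audit Success": logging.INFO,
  }

  logger = logging.getLogger("pci-event-forwarder")
  logger.setLevel(logging.INFO)
  # SysLogHandler sends RFC 3164-style messages over UDP to the central server
  logger.addHandler(logging.handlers.SysLogHandler(
      address=SYSLOG_SERVER,
      facility=logging.handlers.SysLogHandler.LOG_AUTH))

  def forward_event(level_name, source, message):
      """Relay one harvested event to the central syslog server in real time."""
      logger.log(SEVERITY_MAP.get(level_name, logging.INFO), "%s: %s", source, message)

  forward_event("Audit Failure", "Security", "Logon failure for account POS01")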

Unix/Linux Servers – Enable logging using the syslogd daemon, which is a standard part of all UNIX and Linux operating systems such as Red Hat Enterprise Linux, CentOS and Ubuntu. Edit the /etc/syslog.conf file and enter the details of the syslog server.

For example, append the following line to the /etc/syslog.conf file:

  *.*        @abcd

Or, if using Solaris or another System V-type UNIX:

  *.debug    @abcd
  *.info     @abcd
  *.notice   @abcd
  *.warning  @abcd
  *.err      @abcd
  *.crit     @abcd
  *.alert    @abcd
  *.emerg    @abcd

Where abcd is the IP address of the targeted syslog server.

If you need to collect logs from a third-party application, e.g. Oracle, then you may need to use a specialized Unix syslog agent which allows third-party log files to be relayed via syslog.

Other Network Devices – Routers and switches within the scope of the PCI DSS will also need to be configured to send events via syslog. As was detailed for firewalls earlier, syslog is an almost universally supported function on network devices and appliances. However, in the rare case that syslog is not supported, SNMP traps can be used, provided the syslog server being used can receive and interpret SNMP traps.

PCI DSS Requirement 10.6 – "Review logs for all system components at least daily". We have now covered how to get the right logs from all devices within scope of the PCI DSS, but this is often the simpler part of handling Requirement 10. The aspect of Requirement 10 which often concerns PCI Merchants the most is the extra workload they expect from now being responsible for analyzing and understanding a potentially huge volume of logs. There is often an 'out of sight, out of mind' philosophy, or an 'if we can not see the logs, then we can not be responsible for reviewing them' mindset – after all, once logs are made visible and placed on the screen in front of the Merchant, there is no longer any excuse for ignoring them.

Tellingly, although the PCI DSS avoids being prescriptive about how to deliver against the 12 requirements, Requirement 10 specifically details "Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6". In practice it would be an extremely manpower-intensive task to review all event logs in even a small-scale environment and an automated means of analyzing logs is essential.
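
As a simple, illustrative sketch of that kind of automation (not a full SIEM – the file path, message markers and threshold are assumptions), the following Python snippet counts failed-logon messages per reporting host in a collected syslog file and flags anything unusually noisy for review:

  from collections import Counter

  FAILED_LOGON_MARKERS = ("Logon failure", "authentication failure", "Failed password")
  THRESHOLD = 10  # arbitrary example threshold for the daily review

  def review_syslog(path="/var/log/pci/central-syslog.log"):
      """Count failed-logon messages per host and flag noisy hosts for review."""
      failures = Counter()
      with open(path, errors="replace") as log:
          for line in log:
              if any(marker in line for marker in FAILED_LOGON_MARKERS):
                  # In RFC 3164-style records the host name is typically the
                  # fourth whitespace-separated field; adjust for your format.
                  fields = line.split()
                  host = fields[3] if len(fields) > 3 else "unknown"
                  failures[host] += 1
      for host, count in failures.most_common():
          flag = "REVIEW" if count >= THRESHOLD else "ok"
          print(f"{host}: {count} failed logons [{flag}]")

  if __name__ == "__main__":
      review_syslog()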

However, when implemented correctly, this will become so much more than simply a tool to help you cope with the inconvenient burden of the PCI DSS. An intelligent Security Information and Event Management system will be hugely beneficial to all troubleshooting and problem investigation tasks. Such a system will allow potential problems to be identified and fixed before they affect business operations. From a security standpoint, by enabling you to become 'intimate' with the normal workings of your systems, you are then well-placed to spot truly unusual and potentially significant security incidents.

For more information go to http://www.newnettechnologies.com

All material is copyright New Net Technologies Ltd.

Source by Mark Kedgley

The History of CRM – Moving Beyond the Customer Database

Customer Relationship Management (CRM) is one of those magnificent concepts
that swept the business world in the 1990's with the promise of forever changing
the way businesses small and large interacted with their customer bases. In the
short term, however, it proved to be an unwieldy process that was better in
theory than in practice for a variety of reasons. First among these was that it
was simply too difficult and expensive to accurately track and maintain the high
volume of records needed, and to keep them constantly updated.
In the last several years, however, newer software systems and advanced
tracking features have vastly improved CRM capabilities and the real promise of
CRM is becoming a reality. As newer, more customizable Internet solutions
have hit the marketplace, competition has driven prices down, so that even
relatively small businesses are reaping the benefits of custom CRM programs.
In the beginning …
The 1980's saw the emergence of database marketing, which was simply a catch
phrase to define the practice of setting up customer service groups to speak
individually to all of a company's customers.
In the case of larger, key clients it was a valuable tool for keeping the
lines of communication open and tailoring service to the client's needs. In the
case of smaller clients, however, it tended to provide repetitive, survey-like
information that cluttered databases and did not provide much insight. As
companies began tracking database information, they realized that the bare bones
were all that was needed in most cases: what they buy regularly, what they
spend, what they do.
Advances in the 1990's
In the 1990's companies began to improve on Customer Relationship Management
by making it more of a two-way street. Instead of simply gathering data for
their own use, they began giving back to their customers not only in terms of
the obvious goal of improved customer service, but in incentives, gifts and
other perks for customer loyalty.
This was the beginning of the now familiar frequent flyer programs, bonus
points on credit cards and a host of other resources that are based on CRM
tracking of customer activity and spending patterns. CRM was now being used as a
way to increase sales passively as well as through active improvement of
customer service.
True CRM comes of age
Real Customer Relationship Management as it's thought of today really began
in earnest in the early years of this century. As software companies began
releasing newer, more advanced solutions that were customizable across
industries, it became feasible to really use the information in a dynamic way.

Instead of feeding information into a static database for future reference,
CRM became a way to continuously update understanding of customer needs and
behavior. Branching of information, sub-folders, and custom tailored features
enabled companies to break down information into smaller subsets so that they
could evaluate not only concrete statistics, but information on the motivation
and reactions of customers.
The Internet provided a huge boon to the development of these huge databases
by enabling offsite information storage. Where before companies had difficulty
supporting the enormous amounts of information, the Internet provided new
possibilities and CRM took off as providers began moving toward Internet
solutions.
With the increased fluidity of these programs came a less rigid relationship
between sales, customer service and marketing. CRM enabled the development of
new strategies for more cooperative work between these different divisions
through shared information and understanding, leading to increased customer
satisfaction from order to end product.
Today, CRM is still utilized most frequently by companies that rely heavily
on two distinct features: customer service or technology. The three sectors of
business that rely most heavily on CRM – and use it to great advantage – are
financial services, a variety of high tech corporations and the
telecommunications industry.
The financial services industry in particular tracks the level of client
satisfaction and what customers are looking for in terms of changes and
personalized features. They also track changes in investment habits and spending
patterns as the economy shifts. Software specific to the industry can give
financial service providers truly impressive feedback in these areas.
Who's in the CRM game?
About 50% of the CRM market is currently divided between five major players
in the industry: PeopleSoft, Oracle, SAP, Siebel and relative newcomer
Telemation, based on Linux and developed by an old standard, Database Solutions,
Inc.
The other half of the market falls to a variety of other players, although
Microsoft's new emergence in the CRM market may cause a shift soon. Whether
Microsoft can capture a share of the market remains to be seen. However, their
brand-name familiarity may give them an edge with small businesses considering a
first-time CRM package.
PeopleSoft was founded in the mid-1980's by Ken Morris and Dave
Duffield as a client-server based human resources application. By 1998,
PeopleSoft had evolved into a purely Internet based system, PeopleSoft 8.
There's no client software to maintain and it supports over 150 applications.
PeopleSoft 8 is the brainchild of over 2,000 dedicated developers and $ 500
million in research and development.
PeopleSoft branched out from their original human resources platform in the
1990's and now supports everything from customer service to supply chain
management. Its user-friendly system requires minimal training and is relatively
inexpensive to deploy.
One of PeopleSoft's major contributions to CRM was their detailed analytic
program that identifies and ranks the importance of customers based on numerous
criteria, including amount of purchase, cost of supplying them, and frequency of
service.
Oracle built a solid base of high-end customers in the late 1980's,
then burst into national attention around 1990 when, under Tom Siebel, the
company aggressively marketed a small-to-medium business CRM solution.
Unfortunately they could not themselves follow up on the incredible sales they
garnered and ran into a few years of real problems.
Oracle landed on its feet after a restructuring and their own refocusing on
customer needs and by the mid-1990's the company was once again a leader in CRM
technologies. They continue to be one of the leaders in the enterprise
marketplace with the Oracle Customer Data Management System.
Telemation's CRM solution is flexible and user-friendly, with a
toolkit that makes changing features and settings relatively easy. The system
also provides a quick learning environment that newcomers will appreciate. Its
uniqueness lies in that, although compatible with Windows, it was developed as a
Linux program. Will Linux be the wave of the future? We do not know, but if it
is, Telemation's ahead of the game.
The last few years …
In 2002, Oracle released their Global CRM in 90 Days package that promised
quick implementation of CRM throughout company offices. Offered with the package
was a set-fee service for set-up and training for core business needs.
Also in 2002 (a stellar year for CRM), SAP America's mySAP began using a
"Middleware" hub that was capable of connecting SAP systems to externals and
front and back office systems for a unified operation that links partners,
employees, process and technologies in a closed-loop function.
Siebel
consistently based its business primarily on enterprise size businesses willing
to invest millions in CRM systems, which worked for them to the tune of $ 2.1
billion in 2001. However, in 2002 and 2003 revenues slipped as several smaller
CRM firms joined the fray as ASP's (Application Service Providers). These
companies, including UpShot, NetSuite and SalesNet, offered businesses CRM-style
tracking and data management without the high cost of traditional CRM start-up.
In October of 2003, Siebel launched CRM OnDemand in collaboration with IBM.
Their entry into the hosted, monthly CRM solution niche hit the marketplace with
gale force. To some of the monthly ASP's it was a call to arms, to others it was
a sign of Siebel's increasing confusion over brand identity and increasing loss
of market share. In a stroke of genius, Siebel acquired UpShot a few months
later to get them started and smooth their transition into the ASP market. It
was a successful move.
With Microsoft now in the game, it's too soon to tell
what the results will be, but it seems likely that they may get some share of
small businesses that tend to buy based on familiarity and usability. ASP's will
continue to grow in popularity as well, especially with mid-sized businesses, so
companies like NetSuite, SalesNet and Siebel's OnDemand will thrive. CRM on the
web has come of age!
This article on "The History of CRM" is reprinted with
permission.

Copyright © 2004-2005 Evaluseek Publishing.

Source by Lucy P. Roberts

The Benefits of Vtiger CRM for Your Business

The Vtiger CRM is a type of enterprise-ready Open Source CRM software principally designed for small and medium sized companies. It combines the advantages of Open Source software with additional enterprise features which adds more value to the end user.

It is a professional CRM application that is fully featured, 100% Open Source and free of ongoing license fees. Furthermore, the setup cost is very low and there are no per-seat fees. If you want to customize it to suit your business processes and systems, there is plenty of opportunity to do so. It is fully integrated with a range of third-party software systems, and both onsite and hosted cloud solutions are available.

Moreover, there is no upfront capital expenditure, and it allows unlimited users and unlimited traffic. No matter where you are in the world or what language you speak, the Vtiger CRM is international and multilingual. It runs over SSL-secured, 128-bit encrypted web access and has a short implementation time, thereby giving you a quick ROI. It also integrates with ERP systems, provides a web portal for customers and partners, and integrates with Microsoft Outlook, Office, Mozilla Firefox and Thunderbird.

The installation of Vtiger CRM is very easy, as all the necessary software like Apache, MySQL and PHP is integrated, and executables are made available for Windows and Linux (Red Hat, Debian, SuSE, Fedora and Mandrake) operating systems on SourceForge.net. As a result, you do not need to be concerned about setting up the database, web server and other software.

Furthermore, the Vtiger CRM provides a customer relationship management solution for small and medium sized companies with a rich set of features on a secure, customizable platform. It is a web-based, platform-independent CRM and groupware system built on Open Source technologies, which helps you formulate strategies for cross-departmental processes so you can methodically develop your existing and new customer relationships.

It supports your business's internal processes and employees – sales, marketing, customer service and back-office personnel – in better organizing your customers' data, such as accounts and contacts, sales leads, potentials and pipelines, quotes, sales orders as well as trouble tickets and a product knowledge base, and much more. As a result, if you want improved customer service which will in turn lead to more sales and profitability for your business, a Vtiger CRM is surely your best bet.

Source by Olushola George Otenaike

Programming Languages and Frameworks You Should Learn In 2016

The programming languages and frameworks trend for 2016 seems to be leaning more toward frontend development than backend development. Below is a simplified list of what you should take note of and consider improving your knowledge of.

Languages and Platforms

PHP 7 is the latest version of PHP. Big websites like Facebook, Google and Apple use PHP. PHP 7 is also two times faster than the previous version, 5.6 – this will be a huge improvement for CMS systems like WordPress and Drupal.

JavaScript also has a new update called ES2015 (the successor to ES5, formerly known as ES6). Some incredible sites that use JavaScript are Lost Worlds Fairs and Cascade Brewery Co.

Python 3.5 was released in 2015 with some juicy features like async/await support for asyncio. Nearly all libraries are now available for Python 3, so it might be a good time to upgrade your legacy code base.

Node.js has the largest ecosystem of open source libraries in the world. Node.js is always a good study choice and with its long term support release, it provides added stability going forward. LinkedIn and Walmart use some aspects of Node.js on their websites.

Swift 2 was released earlier this year and it's growing rapidly (it's the fastest-growing programming language in history!). It's open source and has already been ported to Linux, which means that it is now possible to build backends and server-side software with it. It's built by Apple (not the granny smith apple) and they have big plans for it, so it would be good to take note of it as its popularity grows.

HTML5 is last but certainly not least. It's the one you need to watch out for! YouTube switched from Flash to HTML5 this year and Adobe Animate now exports to HTML5 by default. It's also one of the fastest-growing job trends on indeed.com, which shows its popularity. HTML5 is probably one of the best long-term languages to study within the next 3 years. Some sites that make use of HTML5 are Ford, Peugeot and Lacoste – they are really cool.

Frontend Frameworks (CSS Frameworks)

These complete frameworks offer features like icons and other reusable components for navigation, sets of forms, styled-typography, buttons, popovers, alerts and more.

Bootstrap has become very popular in 2015 and this popularity is only going to increase in 2016 as it is turning into a web development standard. Version 4 is coming out soon and it will integrate with SASS. It's quite easy to learn and it comes with some neat extensions and examples too.

Foundation is an alternative to Bootstrap. In 2015 they launched version 6, which focuses on modularity so that you can include only the pieces that you need for a faster loading time; it's also built with SASS.

Skeleton is a sexy (there's no other word to describe it) boilerplate for responsive, mobile-friendly development. Skeleton is a small collection of CSS files that help you quickly develop beautiful sites that look incredible on all screen sizes.

Backend Frameworks

Backend frameworks, or application layers, are the 'brain' of the website: how the website operates and the logic behind it. In the backend you are developing the 'brain', whereas in the frontend you are creating the 'face'.

Depending on which language you prefer, there are plenty of choices. Below is a list of a few languages with some of their frameworks:

PHP: Symfony, Zend, Laravel, Slim, Codeigniter and CakePHP
Node.js: Express, Hapi, Sails.js and Total.js
JavaScript: Angular.js, Vue.js, Polymer, React and Ember.js
Ruby: Rails and Sinatra
Java: Play, Spring and Spark
Python: Django and Flask

Frameworks can be very useful, but that does not necessarily mean a given framework will be useful for you. Ultimately, it is the developer's decision whether or not to use a framework, and this will depend on several factors and on what you want to achieve. Go through each framework and see if it aligns with what you want to achieve before you start using it.

CMS (Content Management Systems)

This article would not be complete without mentioning two popular CMSs: WordPress and Drupal. Both are written in PHP, and with the new PHP 7 release they are even faster.

WordPress has evolved from a dry blogging CMS to a fully-fledged CMS/framework with plugins that make almost anything possible. Thousands of developers make a living as WordPress developers by creating premium themes or plugins. You can also use WordPress as a REST API backend.
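
As a hedged illustration of that last point – assuming the WordPress REST API is enabled and exposed at its standard /wp-json/wp/v2/ routes, and using example.com purely as a placeholder site – fetching recent posts from Python might look like this:

  import requests  # third-party HTTP library: pip install requests

  # Placeholder URL; any WordPress install with the REST API enabled will do
  API_URL = "https://example.com/wp-json/wp/v2/posts"

  response = requests.get(API_URL, params={"per_page": 5}, timeout=10)
  response.raise_for_status()

  for post in response.json():
      # Each post comes back as JSON with a rendered title and a permalink
      print(post["title"]["rendered"], "->", post["link"])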

Drupal 8 was released in 2015. It makes use of Symfony 2, Composer packages and the Twig templating engine. A few websites that are run on Drupal are: Johnson & Johnson, BBC Store and World Economic Forum. Drupal is ideal for content heavy websites.

If you are in doubt about what to spend time studying in 2016, we've made a list of 5 frameworks we believe you should invest your time in:

  1. Bootstrap
  2. Angular.js
  3. Ruby on Rails
  4. HTML5
  5. Laravel

As a sixth item, we recommend that you add Git to your list of what to learn in 2016. It's growing like crazy and it's only going to grow in popularity. Companies like Google, Facebook, Microsoft, Twitter and LinkedIn make use of Git.

This is just a short summary of programming languages and frameworks we think you should learn in 2016. Of course there are hundreds of other languages and frameworks out there, but I hope this was of value to you.

Source by Kyle Prinsloo

Asterisk vs Cisco vs Avaya VoIP Telephone Systems

VoIP, or Voice over IP, the latest in voice communication, works by taking a phone call, converting it from analog to digital signals, transmitting these signals over an IP network or broadband connection, and finally terminating it on the PSTN. Call charges are greatly reduced using this technology. The advantage is that software emulating a phone can be loaded on your laptop, thereby enabling you to access its services even while you travel.

VoIP uses SIP (Session Initiation Protocol), a peer-to-peer technology that allows computers to communicate with each other without having calls routed through a central station. Therefore, calling from one SIP-enabled phone to another cuts call charges drastically.

The Asterisk system comes with an Asterisk server which manages things like teleconferencing, voicemail, queues and hold music. The hard phone is a digital phone that has an Ethernet jack and communicates with the server using the SIP protocol. Hard phones, including wireless versions, are not very expensive. Soft phones are implemented in software and can run on a PC. Asterisk runs predominantly on Linux, an open source operating system.

Cisco has telephony solutions that are network based and run on a router. They are scalable and work well in multi-user environments across multiple locations. The UC500 suite is a bundle of services like router, switches, security, telephony and wireless functionality in a single device. This greatly reduces costs for a company which is planning on these services. The Cisco CallManager Express uses SIP to connect phones through the Internet and also has the features of the UC500, making it more viable for medium-scale businesses. Additional features are paging, intercom, ICMP and class of restrictions on a user's calls.

Avaya IP Office uses IP technology to deliver voice and data communication, messaging and customer management across multiple locations with 2 to 300 people. It allows you to work from anywhere, host conferences, integrate applications, and measure and improve customer satisfaction at the touch of a button. It is cost effective as it lowers long-distance call and conferencing fees, supports remote workers and helps keep your business collaborative and up-to-date.

The three products can be compared based on the following few criteria:

• Number of extensions: Asterisk can support up to 100 extensions, while Cisco and Avaya can go up to 360 extensions, thereby supporting large organizations as well. This improves scalability and helps to reduce costs in the long run.
• Freeware: Asterisk is freeware and runs on a Linux server. This makes the telephony solution cheaper than either Cisco or Avaya which make extensive use of routers and switches for communication.
• Installation and maintenance: Asterisk is a programmer's dream, as it is open source and can be changed at will. However, for an end user it may be a nightmare. Support and services are better with Cisco and Avaya, which are established names in the industry.

The main thing going for Asterisk is its cost. However, it is not always advisable to look only at the initial cost of things. Other criteria like scalability, integration of one device with others already in place, interoperability and long-run cost should be considered when choosing one product over another.

Source by Scott Camball

File Integrity Monitoring – PCI DSS Requirements 10, 10.5.5 and 11.5

Although FIM or File-Integrity Monitoring is only mentioned specifically in two sub-requirements of the PCI DSS (10.5.5 and 11.5), it is actually one of the more important measures in securing business systems from card data theft.

What is it, and why is it important?

File integrity monitoring systems are designed to protect card data from theft. The primary purpose of FIM is to detect changes to files and their associated attributes. This article provides the background to three different dimensions of file integrity monitoring, namely:

– Secure hash-based FIM, used predominantly for system file integrity monitoring
– File contents integrity monitoring, useful for configuration files from firewalls, routers and web servers
- File and/or folder access monitoring, vital for protecting sensitive data

Secure Hash Based FIM

Within a PCI DSS context, the main files of concern include:

- System files, e.g. anything that resides in the Windows\System32 or SysWOW64 folders, program files, or, for Linux/Unix, key kernel files

The objective for any hash-based file integrity monitoring system as a security measure is to ensure that only expected, desirable and planned changes are made to in scope devices. The reason for doing this is to prevent card data theft via malware or program modifications.

Imagine that a Trojan is installed onto a card transaction server – the Trojan could be used to transfer card details off the server. Similarly, a packet sniffer program could be placed onto an EPoS device to capture card data – if it was disguised as a common Windows or Unix process with the same program and process names, then it would be hard to detect. For a more sophisticated hack, what about implanting a 'backdoor' into a key program file to allow access to card data?

These are all examples of security incidents where File-Integrity monitoring is essential in identifying the threat.

Remember that anti-virus defenses are typically aware of only around 70% of the world's malware, and an organization can be hit by a zero-day attack (zero-day marks the point in time when a new form of malware is first identified – only then can a remediation or mitigation strategy be formulated, and it can be days or weeks before all devices are updated to protect them).

How far should FIM measures be taken?

As a starting point, it is essential to monitor the Windows\System32 or SysWOW64 folders, plus the main card data processing application program folders. For these locations, run a daily inventory of all system files and identify all additions, deletions and changes. Additions and deletions are relatively straightforward to identify and evaluate, but how should changes be treated, and how do you assess the significance of a subtle change, such as a file attribute? The answer is that ANY file change in these critical locations must be treated with equal importance. Most high-profile PCI DSS security breaches have been instigated via an 'inside man' – typically a trusted employee with privileged admin rights. For today's cybercrime there are no rules.

The industry-acknowledged approach to FIM is to track all file attributes and to record a secure hash. Any change to the hash when the file-integrity check is re-run is a red alert situation – using SHA1 or MD5, even a microscopic change to a system file will denote a clear change to the hash value. When using FIM to govern the security of key system files there should never be any unplanned or unexpected changes – if there are, it could be a Trojan or backdoor-enabled version of a system file.
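
A minimal sketch of that hash-based approach in Python (the folder and baseline file names are illustrative; a production FIM tool would also record file attributes and permissions and keep a secure audit trail of its own):

  import hashlib
  import json
  import os

  def hash_file(path, algorithm="sha256"):
      """Return the secure hash of a single file, read in chunks."""
      digest = hashlib.new(algorithm)
      with open(path, "rb") as handle:
          for chunk in iter(lambda: handle.read(65536), b""):
              digest.update(chunk)
      return digest.hexdigest()

  def snapshot(folder):
      """Build a {relative path: hash} inventory of every file under folder."""
      inventory = {}
      for root, _dirs, files in os.walk(folder):
          for name in files:
              full = os.path.join(root, name)
              inventory[os.path.relpath(full, folder)] = hash_file(full)
      return inventory

  def compare(baseline_path, folder):
      """Report additions, deletions and changes against a stored baseline."""
      with open(baseline_path) as handle:
          baseline = json.load(handle)
      current = snapshot(folder)
      added = sorted(set(current) - set(baseline))
      removed = sorted(set(baseline) - set(current))
      changed = sorted(p for p in current if p in baseline and current[p] != baseline[p])
      return added, removed, changed

  # Example usage with illustrative paths:
  #   json.dump(snapshot(r"C:\Windows\System32"), open("baseline.json", "w"))
  #   print(compare("baseline.json", r"C:\Windows\System32"))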

This is why it is also crucial to use FIM in conjunction with a 'closed loop' change management system – planned changes should be scheduled, and the associated file integrity changes logged and appended to the planned change record.

File Content / Config File Integrity Monitoring

Whilst a secure hash checksum is an infallible means of identifying any system file changes, it only tells us that a change has been made to the file, not what that change is. Sure, for a binary-format executable this is the only meaningful way of conveying that a change has been made, but a more valuable means of file integrity monitoring for 'readable' files is to keep a record of the file contents. This way, if a change is made to the file, the exact change made to the readable content can be reported.

For instance, a web configuration file (PHP, ASP.NET, JavaScript or XML config) can be captured by the FIM system and recorded as readable text; thereafter changes will be detected and reported directly.
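
A hedged sketch of contents-based monitoring for a readable configuration file, using Python's standard difflib (the file names are placeholders; a real system would store the captured copy securely and raise an alert rather than simply printing the difference):

  import difflib
  from pathlib import Path

  CONFIG = Path("web.config")             # placeholder readable config file
  BASELINE = Path("web.config.baseline")  # previously captured copy

  def report_config_changes():
      """Show exactly which lines changed since the last captured version."""
      old = BASELINE.read_text().splitlines(keepends=True)
      new = CONFIG.read_text().splitlines(keepends=True)
      diff = list(difflib.unified_diff(old, new,
                                       fromfile=str(BASELINE), tofile=str(CONFIG)))
      if diff:
          print("".join(diff))               # report the exact change, line by line
          BASELINE.write_text("".join(new))  # re-baseline once the change is reviewed
      else:
          print("No content changes detected")

  if __name__ == "__main__":
      report_config_changes()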

Similarly, if a firewall access control list was edited to allow access to key servers, or a Cisco router startup config altered, then this could allow a hacker all the time needed to break into a card data server.

One final point on file contents integrity monitoring – Within the Security Policy/Compliance arena, Windows Registry keys and values are often included under the heading of FIM. These need to be monitored for changes as many hacks involve modifying registry settings. Similarly, a number of common vulnerabilities can be identified by analysis of registry settings.

File and / or Folder Access Monitoring

The final consideration for file integrity monitoring is how to handle other file types not suitable for secure hash value or contents tracking. For example, a log file or database file will always be changing, so both its contents and its hash will be constantly changing. Good file integrity monitoring technology will allow these files to be excluded from any FIM template.

However, card data can still be stolen without detection unless other measures are put in place. As an example scenario, in an EPoS retail system, a card transaction or reconciliation file is created and forwarded to a central payments server on a scheduled basis throughout the trading day. The file will always be changing – maybe a new file is created every time with a time stamped name so everything about the file is always changing.

The file would be stored on an EPoS device in a secure folder to prevent user access to the contents. However, an 'inside man' with Admin Rights to the folder could view the transaction file and copy the data without necessarily changing the file or its attributes. Therefore the final dimension for File Integrity Monitoring is to generate an alert when any access to these files or folders is detected, and to provide a full audit trail by account name of who has had access to the data.

Much of PCI DSS Requirement 10 is concerned with recording audit trails to allow a forensic analysis of any breach after the event and establish the vector and perpetrator of any attack.

Source by Mark Kedgley