The SAP System OS Collector – SAPOSCOL in a Nutshell


The SAP System OS collector (SAPOSCOL) is a platform-independent, stand-alone program that runs in the background at the OS level and collects system information into a shared-memory segment for various applications and all SAP instances on a host. This information can be viewed through transaction code ST06 or OS06 in the SAP GUI front end. It is a very useful tool for NetWeaver/Basis administrators and consultants to monitor server performance. SAPOSCOL extracts real-time data from the system, although the display does not refresh automatically; you need to click the 'Refresh' button to get updated data. SAPOSCOL collects and records system data every 10 seconds, and also records hourly averages for the last 24 hours. It runs autonomously from the SAP instances, with exactly one process per host, and collects data from various operating system resources. You can monitor all servers in the SAP landscape with this tool; for a remote server (for example, a liveCache server), the transaction code is OS07. From the monitoring list you can check CPU utilization, physical and virtual memory usage, pool data and swap size, disk response times, utilization of physical disks and file systems, the resource load of running processes, and even LAN data.

You can navigate to this tool from SAP Menu -> Tools -> CCMS -> Control/Monitoring -> Performance -> Operating System -> Local -> Activity.

If you cannot see any data, the OS collector (SAPOSCOL) is not running (error: "Shared memory not available"). In this situation your main task is to get saposcol running properly again. This usually happens after a new SAP installation or a kernel upgrade. If you are new to SAP systems, the following guidelines will help you overcome the saposcol issue.

Unix / Linux / AIX / Sun / Solaris System:

First, check the permissions of the saposcol file: the owner should be root, the group sapsys, and the mode 4750 (the setuid bit set). If you want to know which user is running saposcol, use "ps -ef | grep saposcol". To change the saposcol file to owner root, group sapsys, mode 4750, log in to your Unix system as root and execute the commands below:

cd /usr/sap/<SID>/SYS/exe/run
chown root saposcol
chgrp sapsys saposcol
chmod 4750 saposcol

You can also run "saproot.sh" in the exe directory to set the permissions. Then run "saposcol -l" as the same owner (root). Check the collector status using "saposcol -s". After setting the file permissions, you can also use ST06 -> Operating System Collector -> 'Start' to run SAPOSCOL.

To stop the OS collector, use "saposcol -k". If this command fails to kill the process, you can execute "cleanipc 99 remove" (check SAP Note 548699). If this attempt also fails, you need to remove the shared memory key of saposcol: execute "ipcs -ma" and note down the shared memory ID in the line that contains the saposcol key, then execute "ipcrm -m <ID>". The shared memory key will be created again the next time you run saposcol.
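The manual ipcs/ipcrm cleanup above can be sketched as a small script. The key value below is only a placeholder assumption; read the actual saposcol key from the "ipcs -ma" output on your own host first.

```shell
# Sketch of the manual shared-memory cleanup. The key is a placeholder --
# verify the real saposcol key in "ipcs -ma" output on your system first.
SAPOSCOL_KEY="0x00004dbe"   # hypothetical key, check your host

# Extract the shared memory ID from the ipcs output for that key
SHMID=$(ipcs -m | awk -v key="$SAPOSCOL_KEY" '$1 == key { print $2 }')

# Remove the segment only if it was actually found
if [ -n "$SHMID" ]; then
    ipcrm -m "$SHMID"
else
    echo "No shared memory segment found for key $SAPOSCOL_KEY"
fi
```

The segment is recreated automatically the next time saposcol starts, so removing it here is safe once the collector has been stopped.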

Sometimes "saposcol -l" reports that the collector is already running, but "ps -ef | grep -i saposcol" does not show the process. In this situation, you can use the undocumented parameter "saposcol -f", where "f" stands for starting the process forcefully. Once it starts, stop the process in the regular manner using "saposcol -k" and then start it normally using "saposcol -l".

If saposcol still does not run, you need to start it in dialog mode. Log in as the <sid>adm user and follow the steps below:

saposcol -d
Collector> clean
Collector> quit
saposcol -k (to stop the collector)
Before restarting:
saposcol -d
Collector> leave (you should get the message "Shared memory deleted")
Collector> quit
cd /usr/sap/tmp
mv coll.put coll.put.sav
cd
saposcol -l
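The clean-restart sequence above can be sketched as one script. This is a hedged sketch, not SAP's documented procedure: it assumes the <sid>adm shell, the default /usr/sap/tmp collector directory, and that saposcol accepts its dialog commands ("leave", "quit") on standard input.

```shell
# Sketch of the clean-restart sequence; run as <sid>adm on the SAP host.
# Assumes saposcol reads dialog commands ("leave", "quit") from stdin.
if command -v saposcol >/dev/null 2>&1; then
    saposcol -k || true                    # stop the collector if running
    printf 'leave\nquit\n' | saposcol -d   # delete shared memory in dialog mode
    mv /usr/sap/tmp/coll.put /usr/sap/tmp/coll.put.sav  # set old data aside
    saposcol -l                            # clean start
else
    echo "saposcol not found in PATH -- run this on the SAP host" >&2
fi
```

Moving coll.put aside instead of deleting it lets you restore the old collector history if the clean start turns out to be unnecessary.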

"coll.put" contains the old shared memory contents and should be deleted in order to get a clean start (check SAP Note 548699, point 7). If you are unsuccessful in clearing the shared memory, try the following commands:

$ saposcol -kc
$ saposcol -f

If this also fails, you need to restart the system at the OS level, and you may also need a new version of saposcol (check SAP Note 19227).

IBM iSeries i5/OS (OS/400, OS/390):

– Check the permissions of the directory '/usr/sap/tmp' and the file 'saposcol': they should be 4755, and the owner must be root in the sapsys group (check SAP Note 790639). After assigning permissions, you can run the collector from the OS command line using 'SAPOSCOL -l'. To show the status use 'SAPOSCOL -s', and to stop the process use 'SAPOSCOL -k'. You can also run the process by submitting a job at OS level using
CALL PGM(SAPOSCOL) PARM('-l')
This submits the job to job queue QBATCH in library QGPL.

– On iSeries, you might see strange data when analyzing CPU utilization using transaction ST06/OS06. Even if you are using multiple CPUs, SAPOSCOL might only report CPU usage for the first CPU. You may also find CPU utilization reported above 100% in some intervals if you are running the SAP instance in an uncapped partition where multiple logical partitions use a shared processor pool. In this situation, be aware that the CPU usage reported for CPU number 0 is the average usage across all CPUs in use on the system. If you want to view shared CPU partition information, apply support packages as per SAP Note 994025, including the following patch levels:

6.40: disp+work package (DW) 182, SAPOSCOL 69
7.00: disp+work package (DW) 109, SAPOSCOL 34

By applying these patches and support packages, the new transactions OS06N, ST06N, and OS07N become available, showing additional information in two sections titled "Host system" and "Virtual system". These include information about the partition type and the available and consumed CPU in the current partition as well as in the shared processor pool. So, if you are an iSeries user and your SAPOSCOL is not running, the most likely fix is to apply the latest kernel and saposcol patches (SAP Notes 708136 and 753917).

– In another iSeries scenario, saposcol is not running and you cannot start it from ST06/OS06. The problem might be that the authorization list R3ADMAUTL is not accurate. You can solve it this way:

1) Remove QSECOFR *ALL X
2) Change *PUBLIC from *USE to *EXCLUDE
3) Add R3OWNER *ALL X

Now you can start saposcol using transaction ST06/OS06. You can also start the process from the command line:

CALL PGM(SAPOSCOL) PARM('-l')

If this does not solve the problem, check whether both programs QPMLPFRD and QPMWKCOL in library QSYS have *USE authority assigned for user R3OWNER (SAP Note 175852). If not, you have to run the following commands:

GRTOBJAUT OBJ(QSYS/QPMLPFRD) OBJTYPE(*PGM) USER(R3OWNER) AUT(*USE)
GRTOBJAUT OBJ(QSYS/QPMWKCOL) OBJTYPE(*PGM) USER(R3OWNER) AUT(*USE)

Then verify that the user R3OWNER is part of the authority list R3ADMAUTL (SAP Note 637174). If after this you still receive the error "SAPOSCOL not running? (Shared memory not available)", follow the steps below:

1) Remove the shared memory (coll.put) as per SAP Note 189072; the 'coll.put' location is '/usr/sap/tmp'.
2) End the jobs QPMASERV, QPMACLCT, QYPSPFRCOL, and CRTPFRDTA in QSYSWRK if they are running.
3) Delete the temporary user space: WRKOBJ OBJ(R3400/PERFMISC*) OBJTYPE(*USRSPC)
4) ENDTCPSVR *MGTC
5) CALL QYPSENDC PARM('*PFR' '') [there are 6 blanks after *PFR, and 6 blanks make up the second parameter]
6) ENDJOB JOB(xxxxxx/QSYS/QYPSPFRCOL) OPTION(*IMMED) SPLFILE(*YES) [this command must be run for all QYPSPFRCOL jobs found on the system, even if they show *OUTQ as their status]
7) ENDJOB JOB(xxxxxx/QSYS/CRTPFRDTA) OPTION(*IMMED) SPLFILE(*YES) [this command must be run for all CRTPFRDTA jobs, even if they show *OUTQ as their status]
8) RNMOBJ OBJ(QUSRSYS/QPFRCOLDTA) OBJTYPE(*USRSPC) NEWOBJ(QPFRCOLDTX)
9) RNMOBJ OBJ(QUSRSYS/QPFRCOLDTA) OBJTYPE(*DTAQ) NEWOBJ(QPFRCOLDTX) [this object may or may not exist at this time]
10) CALL QYPSCOLDTA [Note: this program will create a new *USRSPC; after Collection Services is started there should be a new *DTAQ]
11) Start Collection Services using GO PERFORM, option 2, then option 1; or CALL QYPSSTRC PARM('*PFR' '*STANDARDP' '') [there are 6 blanks after *PFR, and 6 blanks make up the last parameter]. Alternatively, start Collection Services from Operations Navigator.
12) STRTCPSVR *MGTC
13) End and restart Operations Navigator if it is running. See IBM authorized program analysis report (APAR) SE12188 for more information.
14) Now start SAPOSCOL from ST06/OS06.

Windows System:

– Go to the kernel folder on the command line, where you will find saposcol.exe. Set full owner permissions for the file and folder, then run "saposcol -l" (or "saposcol -d" for dialog mode).

– You can also try starting/stopping the SAPOSCOL service from Control Panel -> Administrative Tools -> Services (services.msc).

If all other attempts fail, make sure you have the correct version of SAPOSCOL. Get the latest SAPOSCOL for your OS from the SAP Service Marketplace. Download the SAPOSCOL.SAR file for your kernel and save it in a directory. Then stop SAP and SAPOSCOL. Check for any kernel library locks, and do not forget to take a library backup. Then run APYR3FIX followed by APYSAP (check OSS Note 19466).

SAPOSCOL can also terminate because of a small internal memory allocation. When this memory gradually fills during SAPOSCOL's runtime, the system writes data outside the buffer; as a result the following buffer is cleared and SAPOSCOL terminates with a dump. Apply the following patches, with at least the patch levels specified below:

SAP Release 640: SAPOSCOL patch level 100 and DW patch level 293
SAP Release 700: SAPOSCOL patch level 75 and DW patch level 151
SAP Release 701: SAPOSCOL patch level 18 and ILE patch level 53
SAP Release 710: SAPOSCOL patch level 36 and ILE patch level 161
SAP Release 711: SAPOSCOL patch level 12 and ILE patch level 48

So it is clear that if we use different SAP systems on one server with an incompatible mixture of kernel versions, SAPOSCOL will struggle and will not provide data for all systems, even though the SAP systems themselves will run without any trouble. This happens because the new IBM technology used with the EXT kernels does not allow SAPOSCOL to reside in the single-level store (SLS), but places it in teraspace instead. So if you run an EXT system alongside non-EXT systems, saposcol will run for only one system. To overcome this issue, upgrade all SAP systems to the EXT kernel with the latest patches, then set the proper authorizations for the SAPOSCOL file and directory as described above, which will solve any problem related to the SAP OS collector.


Source by Masudur Rahman

How to Run Windows Programs on Android


Windows is the most popular operating system for PCs and laptops, while Android is the most widely used platform for smartphones and tablets.

Many people still rely on Windows apps for different purposes but is it possible to use them on handheld devices?

Yes! You can do it with a fast internet connection and virtualization software. Here I will comprehensively guide you through how.

Procedure:

Connect your Windows machine to your smartphone or tablet using the Microsoft Remote Desktop app.

This application gives you access to all the programs installed on your PC, but it only works with certain editions of Windows. For Windows 7 you will need Ultimate, Professional, or Enterprise, while for Windows 8 this utility is available in the Pro and Enterprise editions only. In Windows 10, the Home edition likewise cannot act as a Remote Desktop host.

Given the fact that most of the people use home or basic editions on their PCs, it is not an ideal option.

Although Remote Desktop apps can be used on all Android devices, it is better to connect a tablet rather than a smartphone to your PC with this software, because you will have to zoom and pan again and again while trying to navigate Windows, which is a fairly difficult task on a small screen.

After connecting the two devices, you can run Windows programs using CrossOver, software developed by CodeWeavers. This program has long been a useful tool for running Windows programs on Mac or Linux, and it made its appearance on Android at the end of last year.

Crossover requires an x86 processor and at least 2GB of RAM to run most real-world Windows applications, which limits the availability of this option to certain Android devices.

Like CrossOver, another piece of software, Wine, which is used to run Windows programs on Linux, is also poised to make inroads into Android soon.

Dual-boot Android and Windows Tablets

There are several tablets in the market which allow you to switch between Windows and Android operating systems.

Cube i10 is one of the popular dual-boot tablets. Powered by an Intel Z3735 quad-core 1.8 GHz processor, this 10.6-inch device runs Android 4.4.4 and Windows 8.1 with Bing, features 2 GB of RAM and 32 GB of ROM, and comes with a price tag of $130.

You can search more dual-boot tablets on the sites like GearBest, Geekbuying, and TinyDeal.


Source by Adam Saad

5 Open Source Firewalls You Should Know About


Despite the fact that pfSense and m0n0wall appear to receive the lion's share of consideration in the open source firewall/router market, with pfSense edging out m0n0wall in recent years, there are several excellent firewall/router distributions obtainable under both Linux and BSD. All of these projects build on their respective OSes' native firewalls. Linux, for instance, incorporates netfilter and iptables into its kernel. OpenBSD, on the other hand, uses PF (Packet Filter), which replaced IPFilter as OpenBSD's default firewall in 2001. The following is a (non-exhaustive) list of a few of the firewall/router distributions available for Linux and BSD, along with some of their capabilities.
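To make the netfilter layer these distributions build on concrete, here is a minimal sketch in iptables-restore format. It is not taken from any of the distributions below; the default-deny inbound stance and the SSH rule are illustrative assumptions.

```text
# Minimal iptables-restore fragment: default-deny inbound, SSH allowed.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow replies to connections the host itself initiated
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow inbound SSH (illustrative single service rule)
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

Every distribution in this list ultimately generates rules of this shape; what they add is a management layer (web UI, zones, config files) on top.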

[1] Smoothwall

The Smoothwall Open Source Project was set up in 2000 in order to develop and maintain Smoothwall Express – a free firewall that includes its own security-hardened GNU / Linux operating system and an easy-to-use web interface. SmoothWall Server Edition was the initial product from SmoothWall Ltd., launched on 11-11-2001. It was essentially SmoothWall GPL 0.9.9 with support provided from the company. SmoothWall Corporate Server 1.0 was released on 12-17-2001, a closed source fork of SmoothWall GPL 0.9.9SE. Corporate Server included additional features such as SCSI support, along with the capability to increase functionality by way of add-on modules. These modules included SmoothGuard (content filtering proxy), SmoothZone (multiple DMZ) and SmoothTunnel (advanced VPN features). Further modules released over time included modules for traffic shaping, anti-virus and anti-spam.

A variation of Corporate Server called SmoothWall Corporate Guardian was released, integrating a fork of DansGuardian known as SmoothGuardian. School Guardian was created as a variant of Corporate Guardian, adding Active Directory / LDAP authentication support and firewall features in a package designed especially for use in schools. December 2003 saw the release of smoothwall Express 2.0 and an array of comprehensive written documentation. The alpha version of Express 3 was released in September 2005.

Smoothwall is designed to run effectively on older, cheaper hardware; it will operate on any Pentium class CPU and above, with a recommended minimum of 128 MB RAM. Additionally there is a 64-bit build for Core 2 systems. Here is a list of features:

  • Firewalling:
    • Supports LAN, DMZ, and Wireless networks, plus external
    • External connectivity via: Static Ethernet, DHCP Ethernet, PPPoE, PPPoA using various USB and PCI DSL modems
    • Port forwards, DMZ pin-holes
    • Outbound filtering
    • Timed access
    • Simple to use Quality-of-Service (QoS)
    • Traffic stats, including per interface and per IP totals for weeks and months
    • IDS via automatically updated Snort rules
    • UPnP support
    • List of bad IP addresses to block
  • Proxies:
    • Web proxy for accelerated browsing
    • POP3 e-mail proxy with Anti-Virus
    • IM proxy with real time log-viewing
  • UI:
    • Responsive web interface using AJAX techniques to provide real time information
    • Real time traffic graphs
    • All rules have an optional Comment field for ease of use
    • Log viewers for all major sub-systems and firewall activity
  • Maintenance:
    • Backup config
    • Easy single-click application of all pending updates
    • Shutdown and reboot from the UI
  • Other:
    • Time Service for network
    • Develop Smoothwall yourself using the self-hosting "Devel" builds

[2] IPCop

A stateful firewall created on the Linux netfilter framework that was originally a fork of the SmoothWall Linux firewall, IPCop is a Linux distribution which aims to provide a simple-to-manage firewall appliance based on PC hardware. Version 1.4.0 was introduced in 2004, based on the LFS distribution and a 2.4 kernel, and the current stable branch is 2.0.X, released in 2011. IPCop v. 2.0 incorporates some significant improvements over 1.4, including the following:

  • Based on Linux kernel 2.6.32
  • New hardware support, including Cobalt, SPARC and PPC platforms
  • New installer, which allows you to install to flash or hard drives, and to choose interface cards and assign them to particular networks
  • Access to all web interface pages is now password protected
  • A new user interface, including a new scheduler page, more pages on the Status Menu, an updated proxy page, a simplified DHCP server page, and an overhauled firewall menu
  • The inclusion of OpenVPN support for virtual private networks, as a substitute for IPsec

IPCop v. 2.1 includes bug fixes and a number of additional improvements, including use of the Linux kernel 3.0.41 and a URL filter service. Additionally, many add-ons are available, such as advanced QoS (traffic shaping), e-mail virus checking, traffic overview, extended interfaces for controlling the proxy, and many more.

[3] IPFire

IPFire is a free Linux distribution which can act as a router and firewall, and can be maintained via a web interface. The distribution offers selected server daemons and can easily be expanded into a SOHO server. It offers corporate-level network protection and focuses on security, stability, and ease of use. A variety of add-ons can be installed to add more features to the base system.

IPFire employs a Stateful Packet Inspection (SPI) firewall, which is built on top of netfilter. During the installation of IPFire, the network is configured into separate segments. This segmented security scheme means there is a place for each machine in the network. Each segment represents a group of computers that share a common security level. "Green" represents a safe area. This is where all regular clients will reside, and is usually comprised of a wired local network. Clients on Green can access all other network segments without restriction. "Red" indicates danger or the connection to the Internet. Nothing from Red is permitted to pass through the firewall unless specifically configured by the administrator. "Blue" represents the wireless part of the local network. Since the wireless network has the potential for abuse, it is uniquely identified and specific rules govern clients on it. Clients on this network segment must be explicitly allowed before they may access the network. "Orange" represents the demilitarized zone (DMZ). Any servers which are publicly accessible are separated from the rest of the network here to limit security breaches. Additionally, the firewall can be used to control outbound internet access from any segment. This feature gives the network administrator complete control over how their network is configured and secured.

One of the unique features of IPFire is the degree to which it incorporates intrusion detection and intrusion prevention. IPFire incorporates Snort, the free Network Intrusion Detection System (NIDS), which analyzes network traffic. If something abnormal happens, it will log the event. IPFire allows you to see these events in the web interface. For automatic prevention, IPFire has an add-on called Guardian which can be installed optionally.

IPFire provides many front-end drivers for high-performance virtualization and can be run on several virtualization platforms, including KVM, VMware, Xen, and others. However, there is always the possibility that the VM container's security can be bypassed in some way, allowing an attacker to gain access beyond the virtual machine. Therefore, it is not recommended to run IPFire as a virtual machine in a production environment.

In addition to these features, IPFire incorporates all the functions you expect to see in a firewall / router, including a stateful firewall, a web proxy, support for virtual private networks (VPNs) using IPSec and OpenVPN, and traffic shaping.

Since IPFire is based on a recent version of the Linux kernel, it supports much of the latest hardware such as 10 Gbit network cards and a variety of wireless hardware out of the box. Minimum system requirements are:

  • Intel Pentium I (i586)
  • 128 MB RAM
  • 2 GB hard drive space

Some add-ons have extra requirements to perform smoothly. On a system that fits the hardware requirements, IPFire is able to serve hundreds of clients simultaneously.

[4] Shorewall

Shorewall is an open source firewall tool for Linux. Unlike the other firewall / routers mentioned in this article, Shorewall does not have a graphical user interface. Instead, Shorewall is configured through a group of plain-text configuration files, although a Webmin module is available separately.

Since Shorewall is essentially a frontend to netfilter and iptables, the usual firewall functionality is available. It is able to do Network Address Translation (NAT), port forwarding, logging, routing, traffic shaping, and virtual interfaces. With Shorewall, it is easy to set up different zones, each with different rules, making it easy to have, for example, relaxed rules on the company intranet while clamping down on traffic coming from the Internet.
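As a sketch of what those zone-based plain-text files look like in practice, the fragment below uses Shorewall's standard /etc/shorewall/ file layout; the zone names and the single SSH rule are illustrative assumptions, not a complete working configuration.

```text
# /etc/shorewall/zones -- define the firewall, internet, and intranet zones
#ZONE   TYPE
fw      firewall
net     ipv4
loc     ipv4

# /etc/shorewall/policy -- relaxed outbound from the intranet, strict inbound
#SOURCE  DEST   POLICY   LOG
loc      net    ACCEPT
net      all    DROP     info
all      all    REJECT   info

# /etc/shorewall/rules -- punch a single hole for SSH from the Internet
#ACTION  SOURCE  DEST    PROTO   DPORT
ACCEPT   net     $FW     tcp     22
```

Shorewall compiles these files into iptables rules, so the policy file provides the zone-to-zone defaults and the rules file lists the exceptions.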

While Shorewall once used a shell-based compiler frontend, since version 4 it also uses a Perl-based frontend. IPv6 address support started with version 4.4.3. The most recent stable version is 4.5.18.

[5] pfSense

pfSense is an open source firewall/router distribution based on FreeBSD that began as a fork of the m0n0wall project. It is a stateful firewall that incorporates much of the functionality of m0n0wall, such as NAT/port forwarding, VPNs, traffic shaping, and captive portal. It also goes beyond m0n0wall, offering many advanced features, such as load balancing and failover, the capability of only accepting traffic from certain operating systems, easy MAC address spoofing, and VPNs using the OpenVPN and L2TP protocols. Unlike m0n0wall, whose focus is more on embedded use, the focus of pfSense is on full PC installation; nevertheless, a version targeted at embedded use is also provided.


Source by David Zientara

The Benefit of Linux Managed Hosting For Small Businesses


This trying economy has meant stepping up time and cost effective methods of operating a company. This is especially true for small businesses that have less of a cushion when profits run low. One of the most efficient ways to reduce operating costs is to increase the efficiency of your hosting. Business operations are often compromised by servers that frequently crash, and more often than not hosting is more costly than it ought to be. Linux managed hosting is an optimal way for companies to save money while getting the accommodations that they need.

Without a doubt, this is an option that is growing in popularity by leaps and bounds. For companies that know they must have a managed hosting service, Linux provides everything that small and developing companies need. Better still, it is cost-effective.

Smaller companies generally do not need the high-security servers that larger companies run. The MS-SQL database offers benefits whose larger expense start-up and small-scale operations often cannot justify. With Linux, it is truly a question of why pay more if you are already getting everything that you need?

Linux operates on an open source model. This means the system is highly adaptable: it is relatively open to new and developing software and technologies, and just about anyone can make the transition without special training. It also means that investing in Linux now will still be beneficial after your company begins introducing new hardware and software.

Linux managed hosting offers essentially the same benefits as Windows. There are also features available that Windows does not have. It can also be far easier to use and is far less prone to crashing. This helps office staff make the most of the available work hours, because the systems will always be up and running, ready to conduct valuable and timely business transactions. By investing in Linux managed hosting services, small companies get cost-effective and time-efficient use of their existing resources.


Source by Philip J Morris

Turn a Physical Linux or Windows Machine Into A Virtual Machine for Free


We will be focusing on creating this masterpiece in the Windows environment, but do not worry: the same principles can be used in any operating system that can run VirtualBox.

List of Software and Hardware needed:

Software:

-Virtual Box and Extension Pack

-Windows 7 or higher PC or most any Linux Distro

-Redo Backup and Recovery ISO

-YUMI Installer

Hardware:

-USB Flash drive

-USB Hard drive

The overall benefit of performing this procedure is threefold. First, cost savings on power, climate control, and required space will be seen instantly. Second, manageability and scalability increase dramatically, thanks to working with virtual disks and virtual networks that can be scaled up or down with finer-grained control. Third, redundancy and faster disaster recovery can be provided by cloud services, especially when tied into your existing network infrastructure for a seamless transition when disaster strikes.

While this process can be completed in numerous ways with different software, this is the way that I am familiar with and all the tools needed are free.

Sounds daunting? No sweat, but where do we start first?

Well, we first need to get an image of the physical machine onto removable media (a USB hard drive). I recommend a USB hard drive rather than just a USB flash drive because of the space the image will take up. We will also need a USB flash drive, at least 2 GB in size, to use as bootable media for Redo Backup and Recovery.

Plug the USB hard drive into your USB port and open up the folder structure. Create a folder in a location that you can remember, e.g., D:\YourComputerName. This is the location where we will save the initial image of the physical machine. After this is complete, eject the USB hard drive by right-clicking on the "Safely Remove Hardware" icon in your taskbar and clicking Eject "whatever your USB hard drive is named", then unplug the USB HDD.

Next, we need to create a bootable USB to load Redo Backup and Recovery from. Download a small program called "YUMI". YUMI will create a bootable USB flash drive with Redo Backup and Recovery on it. Also grab a copy of Redo Backup and Recovery, and save both files to your desktop or a location of your choice.

Now, run YUMI and choose your USB flash drive from the list (remember to choose your USB flash drive, not your USB HDD, which should be unplugged anyway!). Choose "Redo Backup and Recovery" from the list of software you can create an installer for. Click the "Browse" button to look for the Redo Backup and Recovery ISO to include in the install. Finally, click "Create" to start the bootable Redo Backup and Recovery USB creation process. When this is done, YUMI will ask if you want to add any more distros; just say "no". Eject the USB flash drive from the computer using the "Safely Remove Hardware" icon in your taskbar, click Eject "whatever your USB flash drive is named", and unplug the USB flash drive. Please keep the Redo Backup and Recovery ISO; we will need it later.

Make sure that the physical computer you would like to virtualize is powered down; if not, please power down the computer. Insert only the USB flash drive into the computer. Power up the computer and press the correct key to access the boot menu, or make sure that the USB drive is set to boot before the computer's internal hard drive. Choose the USB entry to boot from; YUMI should now load. Choose the entry for "Tools", then "Redo Backup and Recovery". Press Enter at the Redo menu to start the mini recovery OS. When Redo Backup and Recovery is loaded, insert your USB HDD and give it about 20 seconds.

Open Redo Backup and Recovery Software:

1. Choose "Backup"

2. Choose your disk to backup (your physical computer's disk)

3. Choose your partitions to backup (typically it would be all partitions and MBR)

4. On the "Destination Drive" screen choose "Connected directly to my computer" and click browse.

5. Locate the folder we made earlier (e.g., D:\YourComputerName) and click OK.

6. Choose a name for the disk image; I usually choose the date. Click Next. The backup process will take anywhere from one to three hours depending on hard drive capacity and computer speed.

Congratulations, at this point you have made a full backup of your physical machine. Please click "Close" in the Redo Backup and Recovery program and choose the power button in the bottom right corner of your screen. Select "Shutdown" and let the computer shut down. Remove both the USB flash drive and the USB HDD, and boot up any computer that has Windows 7 or higher installed on it.

Now, let's turn that physical machine into a virtual machine!

Open up VirtualBox and choose "New". Give your virtual machine a name and choose the type of virtual machine it will be, as well as the version. Choose your memory size; I usually allot 2 GB = 2048 MB if I plan on running the VM on a machine that has 4 GB of RAM physically installed. Create a new hard drive, choose VHD as the hard drive file type, and click Next. Choose "Dynamically allocated" for the storage and click Next. Give your VHD hard drive a name; I usually name it after what is running on it, so name it what you named your computer. Make the VHD hard drive large enough to store your operating system; I usually choose 200 GB to be on the safe side. Again, this depends on how big your physical machine's data was. You are now returned to the VirtualBox Manager screen with your new VM present. Make sure the VirtualBox Extension Pack has been installed. Obtain the extension for your software version and install it like so:

In VirtualBox, click File -> Preferences -> Extensions -> Add Package, then locate the extension file and select it. It will be installed automatically.
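For reference, the same VM and disk can also be created from the command line with VBoxManage. This is a sketch, not the article's method: the VM name and sizes are illustrative assumptions, and the small wrapper prints each command and only executes it when VBoxManage is actually installed.

```shell
VM="YourComputerName"   # hypothetical VM name -- use your machine's name
# Print each command, and run it only when VBoxManage is available
run() { echo "+ $*"; if command -v VBoxManage >/dev/null 2>&1; then "$@"; fi; }

run VBoxManage createvm --name "$VM" --register
run VBoxManage modifyvm "$VM" --memory 2048              # 2 GB of RAM
run VBoxManage createmedium disk --filename "$VM.vhd" \
    --size 204800 --format VHD                           # 200 GB dynamic VHD
run VBoxManage storagectl "$VM" --name "SATA" --add sata
run VBoxManage storageattach "$VM" --storagectl "SATA" \
    --port 0 --device 0 --type hdd --medium "$VM.vhd"
```

This mirrors the GUI steps above (create VM, set memory, create a dynamically allocated VHD, attach it to a controller) and is handy when you need to script the process for several machines.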

Prepare the conversion! Use only Option A or Option B:

Option A: If you can get USB support working in VirtualBox:

Make sure that you have installed the Extension Pack and set up USB access properly; if you are having trouble, refer to the VirtualBox documentation here:

https://www.virtualbox.org/manual/ch03.html#idp55342960

In VirtualBox, click on your VM name and choose "Settings" at the top, then "Storage". Click on the empty CD/DVD icon, then the CD/DVD icon on the right under "Attributes", select your Redo Backup and Recovery ISO, and click "OK". At this point you have the Redo Backup and Recovery ISO at the ready and a blank VHD to install to. All you need to do now is insert your USB hard drive and skip over Option B, because you do not need to perform it.

Option B: If you can not get USB support to work in Virtual Box. No problem, it's what happened to me, and I found a way around it.

In Virtual Box, click on your VM name, choose "Settings" at the top, then "Storage", and choose "Add hard disk" next to Controller: SATA or Controller: IDE, whichever you have. Choose "Create new disk", choose VHD, and again make it 200 GB dynamically allocated and name it "Installer". Underneath "Storage Tree", click on the empty CD/DVD icon and then the CD/DVD icon on the right under "Attributes", select your Redo Backup and Recovery ISO, and click "OK". At this point you have the Redo Backup and Recovery ISO at the ready, a blank VHD named after your computer, and another blank VHD named Installer. Now close Virtual Box, right click on "Computer", and choose "Manage". Left click on "Disk Management", then right click on "Disk Management" again and choose "Attach VHD". Browse for the location of the Installer VHD that you created in Virtual Box, usually in the "My Documents" folder, and click OK. Now you can copy the physical computer backup image that we took earlier from D:"Your Computer's Name" to the Installer VHD. After the contents have been copied, right click on "Disk Management" again and click "Detach VHD". Open up Virtual Box and proceed to the next step.

Lets Convert This Thing!

Once you have either USB support or the Installer VHD set up and the Redo Backup and Recovery ISO mounted, press "Start" on your VM name in Virtual Box. You will be met with the familiar Redo Backup and Recovery boot menu; press enter to proceed. Launch the Backup and Recovery program if it did not start automatically. Choose "Restore". In a nutshell, you will choose where your image backup is, "The Source Drive" (your USB HDD, or Installer VHD if applicable), and where to install the image (the blank VHD named after your computer). After you have chosen to install into the blank VHD, confirm the prompt to overwrite any data and let the recovery process begin. After this is finished, click close and shut down Backup and Recovery as you did before. The VM should stop running. Click on "Settings" from the Virtual Box Manager and unmount the Backup and Recovery ISO and the Installer VHD if applicable. Leave the VHD with the name of your computer, or whatever you named it, and click on "OK" to go back to the Virtual Box Manager. Click on "Start"; you should now be looking at a fully virtualized version of your physical computer!

Celebrate the many uses of this powerful little VHD!

You can transport this VHD and include it in any Virtual Box VM instance or even VMware if you are so inclined. You can run it on your local premises or deploy it in the cloud. A cloud instance of this VM would either require running Virtual Box on your cloud computing instance, or running it natively in your cloud computing space if the hosting provider supports it.

Common Gotchas and Troubleshooting:

Q: When trying to run my Linux based virtual machine, I get "not syncing: VFS: Unable to mount root fs on unknown-block (0,0)"?

A: This is because in the backup and recovery process all the entries for hda##, hdb## and so forth have been converted to sda## etc. First, copy your precious VHD so you will not lose your work if something goes wrong. Then all you have to do is mount the Backup and Recovery ISO, start your VM again, and bring up a terminal session. Mount the root partition and edit the entries in GRUB or LILO to point at the proper boot device. For example: in GRUB, the entries are in menu.lst and fstab. In LILO they are in /etc/lilo.conf; afterwards run /sbin/lilo -v to write the changes.
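The device-name fix above can be sketched with sed. This is a demo on a made-up copy of a GRUB menu.lst; on the real system you would mount the root partition from the rescue ISO and edit the file in place:

```shell
# Demo menu.lst with an old-style /dev/hda boot device (contents are made up)
cat > /tmp/menu.lst.demo <<'EOF'
title Linux
    root (hd0,0)
    kernel /vmlinuz root=/dev/hda1 ro
EOF

# Rewrite the Linux device node; GRUB's own (hd0,0) notation is unrelated
# to /dev/hd* names and stays as-is
sed -i 's|/dev/hda|/dev/sda|g' /tmp/menu.lst.demo
grep 'root=' /tmp/menu.lst.demo
```

The same substitution applies to fstab entries; always work on a copy of the VHD first, as noted above.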

Q: When trying to run my Windows based virtual machine I get a boot error?

A: Obtain a copy of a Windows disc and mount it inside of Virtual Box, making sure it is set to boot first. Choose the "Repair" option. Choose "Start Up Repair" and let it run. If this does not do the trick, go back into the "Repair" option and choose "Command Prompt". Try these commands one at a time, shutting down and unmounting the Windows disc each time to check if the problem has been corrected:

bootrec.exe /FixMbr. Then restart to see if resolved. If no result, try:

bootrec.exe /FixBoot. Then restart to see if resolved. If no result, try:

bootrec.exe /RebuildBcd. Then restart to see if resolved. If no result, try:

You may have to remove your BCD folder by running these commands one line at a time:

bcdedit /export C:\BCD_Backup

c: (only if your Windows installation is installed on C:)

cd boot

attrib bcd -s -h -r

ren c:\boot\bcd bcd.old

bootrec /RebuildBcd


Source by David T Goodwin

What Makes Unix a Unique Operating System?


Unix is an "ideal" operating system that has been developed by many different vendors over the years. There are many different Unix systems that differ in functionality, external look and feel, licensing model, and other non-standard features, developed by these different vendors. A few examples are Linux distributions, BSD systems, Sun/Oracle Solaris, and Apple OS X. However, there are a number of features that are common to all Unix and Unix-like systems. Unix systems have a hierarchical file system that allows relative and absolute file path naming. These file systems can be mounted locally or remotely from a file server. All operations on file systems are carried out by processes, which may spawn child processes to perform discrete tasks. Every process can be identified by its unique process ID (PID).

Unix systems have a core kernel which is responsible for managing core system operations, such as logical devices for input/output (/dev/pty, for example) and allocating resources to user and system services.

Originally designed as a text-processing system, Unix systems share many tools that manipulate and filter text in various ways. In addition, small utilities can be easily combined to form complete applications in rather sophisticated ways. Output from one application can be redirected to a file or another application. Combining applications with redirects allows creation of simple or more complex scripts that are capable of performing complicated and automated operations on text and files. These applications and scripts are executed from a user shell, which defines the user interface to the kernel.

Unix is a multiprocess, multiuser, and multi-threaded system. This means that more than one user can execute a shell and applications concurrently, and that each user can execute applications concurrently from within a single shell. Each of these applications can then create and remove lightweight processes as required. Because Unix was created by active developers rather than operating system administrators, it is best suited to fit programmers' needs.

Below are some common features to typical Unix applications following Unix principles.

  • Programs are small and self-contained, typically built to perform a single task. If a new task needs to be solved, a new program is usually developed or existing programs are combined into a script.
  • Programs accept data from standard input and write to standard output, and in turn they can be chained to process each other's output sequentially. Programs are non-interactive; instead they present a wide range of command-line options that specify the action to perform. These ideas are consistent with the concept of piping, which is still fundamental to the operation of user shells. For example, the output of the ls command to list all files in a directory can be "piped" using the | symbol to a program such as grep to perform pattern matching. The number of pipes on a single command-line instruction is not limited.
  • If some software does not work properly, a new version is usually developed within weeks or sometimes days.
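The piping idea described above can be shown in a short shell session (the directory and file names below are made up for the demonstration):

```shell
# Create a throwaway directory with a few demo files
mkdir -p /tmp/pipe_demo
touch /tmp/pipe_demo/notes.txt /tmp/pipe_demo/report.txt /tmp/pipe_demo/image.png

# Each small program does one job and hands its output to the next:
# ls lists the names, grep keeps those containing "txt", wc counts them
ls /tmp/pipe_demo | grep 'txt' | wc -l
```

The pipeline prints 2, because two of the three files match. Any number of further filters (sort, uniq, awk, and so on) could be chained on in the same way.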

This list is not an exhaustive explanation of what the Unix system is. It is rather a guide to understanding what makes Unix an exceptional operating system.


Source by Tim P Johnson

How to Virtualize Red Hat and CentOS Linux Physical Server Into VMware VSphere ESX Servers


Converting a Red Hat or CentOS Linux physical server into VMware VSphere ESX servers is not an easy task compared to MS Windows. It is even harder if you have an older version of Red Hat, such as 7.2, or a Linux box with software RAID.

I have successfully converted a few units of Red Hat 7.2 and software-RAID Linux boxes over the past few years. The process of P2V migration needs some extra effort and is a challenging process. You might face all sorts of issues after P2V migration, such as a kernel panic or, even worse, a Linux OS that cannot boot up because no virtual hard disk is found.

Hot Cloning using VMware tools to be used for the conversion task:

  • VMware ESXi 4.0.0 or ESX 4.0.0
  • VMware Converter Standalone 4.0.1

From experience, these are the best tools to use for P2V migration. I faced lots of issues using ESX 4.1.x and was forced to downgrade for the migration process. Worst case scenario, if hot cloning is not working, the remaining method, cold cloning using Clonezilla, will help you with most Linux-distro cloning. Let's assume that you have performed the migration successfully without error. The next challenge is to get Red Hat or CentOS Linux to boot up.

  1. You have to choose the correct disk controller, either BusLogic Parallel or LSI Logic Parallel, in the VMware image settings. This is an important step; otherwise your Linux will never find its virtual hard disk.
  2. Boot from System Rescue and chroot to your /mnt/sysimage.
  3. Add the disk controller to /etc/modules.conf and remove any RAID settings.
  4. Recompile the kernel and reinstall GRUB.
  5. Edit /etc/fstab and /etc/grub.conf to change all the hda entries to sda so that the file systems can be mounted properly when the system boots.
  6. Once the system boots correctly, run Kudzu to re-initialize hardware detection.
  7. Install VMware Tools.

The steps above are the necessary steps for migrating a physical server into VMware VSphere ESX servers. There is, however, plenty of extra configuration that needs to be done.
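Step 5 above can be sketched with sed. This demo works on a made-up copy of fstab; on the real rescue system you would edit /mnt/sysimage/etc/fstab and /mnt/sysimage/etc/grub.conf after the chroot:

```shell
# Demo fstab with old-style hda device names (contents are made up)
cat > /tmp/fstab.demo <<'EOF'
/dev/hda1  /     ext3  defaults  1 1
/dev/hda2  swap  swap  defaults  0 0
EOF

# Change every hda reference to sda so the VM's emulated SCSI/SATA
# controller can mount the file systems at boot
sed -i 's|/dev/hda|/dev/sda|g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Back up both files before editing them; a typo here leaves the guest unbootable.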


Source by James Edward Lee

Monitoring Var Log Messages in Linux – Monitor Your Log Files Effectively


Monitoring the var log messages file: Do you wish to monitor the /var/log/messages file on your Linux servers?

What exactly does it mean to monitor the /var/log/messages file on a Linux server? You see, there are various errors and incidents that many Linux users may want to watch for in their var log messages file. And while a simple tail and grep can isolate those wanted messages very quickly and easily, there often comes a time when something more sophisticated is needed. Something that is more controllable.

Say for instance there's a crisis at your job (like a server crash) and you need to quickly LOOK at the system log files for certain errors or messages that will inform you of what happened. What would you do in that situation? You're already frantic. How many tails and greps are you going to run before you go insane?

What if there's a log monitoring command you can run that will grab out the information you need based on a time-frame?

Say you had a server crash and the higher-ups at your job are breathing down your neck for answers concerning why the server went down.

In that case, you can run over to the /var/log/messages file (or any UNIX system log file) and run a command like the one below, where you can choose to pull out all lines from the log file that have the strings "error" and "panic" in them, and that occurred within the past 60 minutes. The 60-minute time-frame can of course be adjusted to fit whatever time period you need to grab.

Syntax: logrobot (log-file) (minutes-to-search) (string-to-search1) (string-to-search2) (action) (warning) (critical).

Example: logrobot /var/log/messages 60 'error' 'panic' -show 5 10

This simple line of code will save you a lot of headaches and in some cases, it will also save you your job.
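If logrobot is not installed, a rough approximation with standard tools looks like the sketch below. It matches the two strings but, unlike logrobot, does not restrict matches to a time window; the demo log file and its contents are made up:

```shell
# Small demo log; a real run would point at /var/log/messages
cat > /tmp/messages.demo <<'EOF'
Mar 10 10:01:02 host kernel: all good
Mar 10 10:02:03 host kernel: disk error on sda
Mar 10 10:03:04 host kernel: panic - not syncing
EOF

# Keep only lines containing "error" or "panic", case-insensitively
grep -Ei 'error|panic' /tmp/messages.demo
```

This prints the two matching lines. Implementing the time window by hand means parsing syslog timestamps, which is exactly the tedium a dedicated tool spares you.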


Source by Jonathan Rayson

Using the Linux Ls Command to See Linux File "Patterns" – Linux Commands Training Quick Tips


The [pattern] Part of a Linux Command

The Linux [pattern] (aka Linux shell pattern) part of a Linux command is a combination of letters and wildcard characters that are used with Linux commands to view information about Linux directories and files.

The Linux [pattern] of a Linux command does not work the same with all commands.

Linux ls Command Examples Showing Linux Command Patterns for Linux Files and Directories

The Linux commands examples that are shown below will help you to understand how a "file (or directory) matching pattern" can be used with a Linux command.

The Linux commands below will work in most Linux distributions, however, some of the Linux ls commands below may not show any Linux files in the output, depending on your Linux distribution.

The [pattern] component of a command is used to represent a file matching "pattern". It can be one or more letters, numbers or other characters, and may include the * (asterisk) and ? (question mark) wildcard characters.

A [pattern] can be the name of an item (directory or file) or part of the name of an item (plus wildcard characters).

A [path] to a directory can precede a [pattern] (as shown in the second Linux command example shown below).

When a [path] is not used with a command, the command will typically display output based on the files in the current directory (as shown in the first Linux command example below).

The Linux ls command below uses the pattern of * (a single asterisk) to show all files in the current directory (and if you're working as a "regular" user and you're in your home directory, there may not be any files or directories that appear).

    $ ls -l *

The ls command below uses the path and pattern of "/etc/host*" to show all files in the etc directory that begin with "host". The [path] is /etc and the [pattern] is "host*" (which uses the * wildcard character in the pattern).

    $ ls -l /etc/host*

The suffix (aka filename extension, or extension) in the name of an item is the rightmost . (dot) plus the characters to the right of that . (dot).

For example, in the directory named rc.d, the ".d" is the suffix of the directory and in the file named speedbar.gz, the ".gz" is the suffix of the file.

In the Linux ls command example below, the path and pattern is "/etc/*.cfg", where the path is "/etc" and the pattern is "*.cfg". This Linux command shows a listing of all files that end in ".cfg" in the etc directory, which is below the root directory.

    $ ls -l /etc/*.cfg

In the ls command example below, the ? wildcard character is used to represent any single character in the pattern of "host?" to show only files with a single character at the right of "host".

    $ ls -l /etc/host?
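The difference between the * and ? wildcards can be seen side by side in a throwaway directory (the directory and file names below are made up, since the contents of /etc vary by distribution):

```shell
# Create demo files to show how * and ? match
mkdir -p /tmp/glob_demo
touch /tmp/glob_demo/hosts /tmp/glob_demo/host1 /tmp/glob_demo/hostname.cfg

ls /tmp/glob_demo/host*   # * matches any run of characters: all three names
ls /tmp/glob_demo/host?   # ? matches exactly one character: hosts and host1
ls /tmp/glob_demo/*.cfg   # only names ending in .cfg: hostname.cfg
```

Note that the shell, not ls, expands the pattern into matching names before the command runs, which is why these wildcards behave the same with other commands.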

The Linux concepts and commands discussed above apply to Red Hat, Debian, Slackware, Ubuntu, Fedora, SUSE and openSUSE Linux – and also ALL Linux distributions.


Source by Clyde E. Boom

How to Choose the Right Linux VPS Plan


Linux VPS plans are an excellent choice for website owners or administrators who wish to benefit from the flexibility and infrastructure of private servers, but with the cost effectiveness of shared hosting. A profitable combination of shared hosting and dedicated servers, Linux VPS hosting allows users to run all popular distributions, from Ubuntu and CentOS to Fedora and Debian. Since virtual private servers have become such a popular choice these days, there are now plenty of providers available and countless possible plans for users to select from, which is why it is important to research your options properly before committing to a plan, as well as to a certain provider. Fortunately, the Internet comes as a great help, as there are numerous resources available on the subject and you can read invaluable information with regard to Linux virtual private servers online. An informed decision is the best decision, but there are also particular factors that need to be taken into account when choosing your hosting plan.

No matter how promising an ad or deal may appear, before choosing or subscribing to a Linux VPS hosting plan from any provider, you need to accurately assess your needs. If you are running a static website, then your memory needs and storage use should not be as high as if you were running, let's say, a database-driven website, in which case you should also consider a higher value of premium traffic. There are also different packages and plans for WordPress blogs and Magento systems, in accordance with the particular necessities of these types of websites, so if you have this kind of site, look for providers that offer special packaging for it. As far as RAM goes, you will be able to find virtual private server hosting plans that vary from 256 MB to 8192 MB, so there is no lack of alternatives. You just need, as mentioned above, to assess your requirements and needs accurately, in order to make the most suitable choice. The traffic amount also varies significantly, and you will be able to find plans that offer 250 GB of premium traffic and others that give you 4 TB of premium traffic.

Apart from the type of website you are running and your traffic needs, another factor that you should take into consideration when choosing the right Linux VPS hosting plan is value for money. Since there are now so many providers that offer this service, the competition between them is fierce, and with a thorough research you will be able to find a reasonable offer for what you need. Truthfully, you might come across attractive packages that provide great storage space, high amounts of premium traffic, proper CPU power and excellent RAM size, but the asking price may be way above the market value or simply above what you can afford. Take your time and compare providers and plans as well, and you will certainly find a hosting plan that gives you the best value for money.


Source by Groshan Fabiola