Sunday, June 20, 2010

Contributing towards the new knowledge-based economy: KPO



The evolution and maturity of the Indian BPO sector has given rise to yet another wave in the global outsourcing scenario: KPO, or Knowledge Process Outsourcing.

In today's knowledge era, following in the footsteps of BPO, KPO has emerged as the next big outsourcing sector and is contributing heavily to the economy. KPO stands for Knowledge Process Outsourcing and covers processes that demand advanced information search, analytical, interpretive and technical skills, as well as judgment and decision-making. KPO can be considered a one-step extension of Business Process Outsourcing (BPO), because the BPO industry is shaping itself into KPO on account of its advantages and future scope. But it is not simply a 'B' replaced by a 'K'. A knowledge process can be defined as a high-value-added process chain in which achieving the objectives depends heavily on the skills, domain knowledge and experience of the people carrying out the activity. When such an activity is outsourced, a new business activity emerges, generally known as Knowledge Process Outsourcing. It is a high-end activity that is likely to boom in the coming years, and there is huge potential in this field of knowledge.
The whole concept of KPO is information-driven: it is a continuous process of creating and disseminating information, bringing industry leaders together to create knowledge and find meaning in information and its context. KPO typically involves components of Business Process Outsourcing (BPO), Research Process Outsourcing (RPO) and Analysis Process Outsourcing (APO). KPO business entities provide domain-based processes, advanced analytical skills and business expertise, rather than just process expertise, and the KPO industry handles far more highly skilled work than the BPO industry. While KPO derives its strength from depth of knowledge, experience and judgment, BPO is by contrast more about size, volume and efficiency.
Fields of work that the KPO industry focuses on include intellectual property or patent research, content development, R&D in pharmaceuticals and biotechnology, market research, equity research, data research, database creation, analytical services, financial modeling, design and development in automotive and aerospace industries, animation and simulation, medical content and services, remote education, publishing and legal support. MBAs, PhDs, engineers, doctors, lawyers and other specialists are expected to be much in demand.
Most low-level BPO jobs provide support for an organization's core competencies and entry-level prerequisites are simply a command of English (or applicable language) and basic computer skills. Knowledge process outsourcing jobs, in comparison, are typically integrated with an organization's core competencies. The jobs involve more complex tasks and may require an advanced degree and/or certification. Examples of KPO include accounting, market and legal research, Web design and content creation.
The success achieved by many overseas companies in outsourcing business process operations to India has encouraged many of them to start outsourcing their high-end knowledge work as well. Cost savings, operational efficiencies, availability of and access to a highly skilled and talented workforce, and improved quality are all underlying expectations in offshoring high-end processes to India.
KPO delivers high value to organizations by providing domain-based processes and business expertise rather than just process expertise. These processes demand the advanced analytical and specialized skills of knowledge workers with domain experience to their credit. Outsourcing knowledge processes therefore faces more challenges than BPO, among them maintaining higher quality standards, investing in KPO infrastructure, a limited talent pool, the need for a higher level of control, confidentiality and enhanced risk management. Measured against these challenges, Indian IT and ITES service providers hold up well, and it is not surprising that India has been ranked the most preferred KPO destination owing to its large talent pool, quality IT training, friendly government policies and low labor costs.
Even the Indian government has recognized that knowledge processes will influence economic development extensively in the future and has taken remarkable measures towards liberalization and deregulation. Recent reforms have reduced licensing requirements, made foreign technology accessible, removed restrictions on investment and made the process of investing much easier. The government has also been continuously improving infrastructure: building better roads, setting up technology parks, opening up telecom for enhanced connectivity and providing uninterrupted power to support growth.
The last five years have seen vast development in Knowledge Parks, with infrastructure of global standards, in cities like Chennai, Bangalore and Gurgaon. Multi-tenanted 'intelligent' buildings, built-to-suit facilities and sprawling campuses are tailor-made to customer requirements.
Knowledge Process Outsourcing has proven to be a boon for improving productivity and increasing cost savings in the area of market research. Organizations are adopting outsourcing to meet their market research needs, and the trend is set to take the global market research industry by storm. India's intellectual potential is the key factor behind India being the favored destination for the KPO industry.
A major reason why companies in India will have no option but to move up the value chain from BPO to KPO is quite simple: by 2010, India may have become too costly to provide low-end services at competitive rates. Evalueserve, for example, says Indian salaries have increased at an average of 14 per cent a year. The number of professionals working in the offshore industry is expected to increase as more and more companies become involved in BPO and KPO. This will further drive the migration from low-end to high-end services, especially as offshore service vendors (and the professionals working in this sector) gain the experience and capability to provide high-value services.
The future of KPO is very bright. Surveys suggest that the global KPO industry will reach nearly 17 billion dollars by the end of 2010, of which approximately 12 billion dollars' worth of business will be outsourced to India. What's more, the Indian KPO industry is expected to employ roughly 250,000 additional KPO professionals by the end of 2010, compared with the current estimated figure of 25,000 employees. India is predicted to capture nearly 70 percent of the KPO sector, and the potential is high because KPO is not restricted to the Information Technology (IT) or Information Technology Enabled Services (ITES) sectors; it also includes Intellectual Property related services, Business Research and Analytics, Legal Research, Clinical Research, Publishing, Market Research and more.

Computing with nanotechnology: Nanocomputing



Nanocomputer is the logical name for a computer smaller than the microcomputer, which is smaller than the minicomputer. More technically, it is a computer whose fundamental parts are no bigger than a few nanometers.

The world has been moving fast from mini to micro, and the latest step is nanotechnology. A nanometer (nm) is a unit of measure equal to one billionth of a meter; about ten atoms fit side by side in a nanometer. Nanotechnology today is an emerging set of tools, techniques, and applications involving the structure and composition of materials on the nanoscale. It is the art of manipulating materials on an atomic or molecular scale to build microscopic devices such as robots, which in turn could assemble individual atoms and molecules into products much as if they were Lego blocks; in short, it is about building things one atom at a time and making extraordinary devices out of ordinary matter. A nanocomputer is a computer whose physical dimensions are microscopic, and the field of nanocomputing, part of the emerging field of nanotechnology, describes computing that uses these extremely small, nanoscale devices.
A nanocomputer is similar in many respects to the modern personal computer, but on a very much smaller scale. Access to several thousand (or even millions) of nanocomputers, depending on a user's needs, would give a whole new meaning to the expression "unlimited computing": users may be able to gain far more power for less money. Several types of nanocomputers have been suggested or proposed by researchers and futurists. Electronic nanocomputers would operate in a manner similar to present-day microcomputers; the main difference is one of physical scale. More and more transistors are squeezed into silicon chips with each passing year, as witnessed by the evolution of integrated circuits (ICs) with ever-increasing storage capacity and processing power. The ultimate limit to the number of transistors per unit volume is imposed by the atomic structure of matter, and most engineers agree that technology has not yet come close to pushing this limit. In the electronic sense, the term nanocomputer is relative; by 1970s standards, today's ordinary microprocessors might be called nanodevices.
Chemical and biochemical nanocomputers have the power to store and process information in terms of chemical structures and interactions. Biochemical nanocomputers already exist in nature; they are manifest in all living things. But these systems are largely uncontrollable by humans. The development of a true chemical nanocomputer will likely proceed along lines similar to genetic engineering. Engineers must figure out how to get individual atoms and molecules to perform controllable calculations and data storage tasks.
Mechanical nanocomputers would use tiny moving components called nanogears to encode information. Such a machine is suggestive of Charles Babbage’s analytical engines of the 19th century. For this reason, mechanical nanocomputer technology has sparked controversy; some researchers consider it unworkable. All the problems inherent in Babbage's apparatus, according to the naysayers, are magnified a millionfold in a mechanical nanocomputer. Nevertheless, some futurists are optimistic about the technology, and have even proposed the evolution of nanorobots that could operate, or be controlled by, mechanical nanocomputers.
A quantum nanocomputer would work by storing data in the form of atomic quantum states or spin. Technology of this kind is already under development in the form of single-electron memory (SEM) and quantum dots. The energy state of an electron within an atom, represented by the electron energy level or shell, can theoretically represent one, two, four, eight, or even 16 bits of data. The main problem with this technology is instability. Instantaneous electron energy states are difficult to predict and even more difficult to control. An electron can easily fall to a lower energy state, emitting a photon; conversely, a photon striking an atom can cause one of its electrons to jump to a higher energy state.
There are several ways nanocomputers might be built, using mechanical, electronic, biochemical, or quantum technology. It is unlikely that nanocomputers will be made out of semiconductor transistors (Microelectronic components that are at the core of all modern electronic devices), as they seem to perform significantly less well when shrunk to sizes under 100 nanometers.
Computing systems implemented with nanotechnology will need to employ defect- and fault-tolerant measures to improve their reliability, both because many factors can lead to imperfect device fabrication and because nanometer-scale devices are more susceptible to environmentally induced faults. Researchers have approached this reliability problem from many angles, with promising examples ranging from classical fault-tolerant techniques to approaches specific to nanocomputing; the results so far suggest that many useful, yet strikingly different, solutions may exist for tolerating defects and faults in nanocomputing systems. A number of software tools are also available for quantifying the reliability of nanocomputing systems in the presence of defects and faults.
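To make one of those classical fault-tolerant techniques concrete, here is a minimal sketch of triple modular redundancy (TMR) in Python: three copies of an unreliable gate are run and a majority vote hides any single fault. The gate model and its error rate are invented for illustration, not taken from any real nanodevice.

```python
import random

def faulty_gate(a, b, error_rate=0.01):
    """A NAND gate that occasionally flips its output, standing in for an
    unreliable nanoscale device (the error rate is an arbitrary assumption)."""
    result = not (a and b)
    if random.random() < error_rate:
        result = not result  # model a transient fault
    return result

def tmr_nand(a, b):
    """Triple modular redundancy: run three copies of the gate and vote."""
    votes = [faulty_gate(a, b) for _ in range(3)]
    return votes.count(True) >= 2

if __name__ == "__main__":
    trials = 100_000
    # NAND(True, True) should be False, so any True output is an error.
    plain_errors = sum(faulty_gate(True, True) for _ in range(trials))
    tmr_errors = sum(tmr_nand(True, True) for _ in range(trials))
    print(f"single-gate errors: {plain_errors}, TMR errors: {tmr_errors}")
```

With an error rate of p per gate, the voted output is wrong only when at least two copies fail at once, roughly 3p², which is why the TMR error count comes out far smaller.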
The potential for a microscopic computer appears to be endless. Along with use in the treatment of many physical and emotional ailments, the nanocomputer is sometimes envisioned to allow for the ultimate in a portable device that can be used to access the Internet, prepare documents, research various topics, and handle mundane tasks such as email. In short, all the functions that are currently achieved with desktop computers, laptops, and hand held devices would be possible with a nanocomputer that is inserted into the body and directly interacts with the brain.
Despite the hype about nanotechnology in general and nanocomputing in particular, a number of significant barriers must be overcome before any progress can be claimed.
Work is needed in all areas associated with computer hardware and software design:
• Nanoarchitectures and infrastructure
• Communications protocols between multiple nanocomputers, networks, grids, and the Internet
• Data storage, retrieval, and access methods
• Operating systems and control mechanisms
• Application software and packages
• Security, privacy, and accuracy of data
• Circuit faults and failure management
Basically, the obstacles can be divided into two distinct areas:
Hardware: the physical composition of a nanocomputer, its architecture, its communications structure, and all the associated peripherals
Software: new software, operating systems, and utilities must be written and developed, enabling very small computers to execute in the normal environment.

Nanocomputers have the potential to revolutionize the 21st century. Increased investments in nanotechnology could lead to breakthroughs such as molecular computers. Billions of very small and very fast (but cheap) computers networked together can fundamentally change the face of modern IT computing in corporations that today are using mighty mainframes and servers. This miniaturization will also spawn a whole series of consumer-based computing products: computer clothes, smart furniture, and access to the Internet that's a thousand times faster than today's fastest technology.
Nanocomputing's best bet for success today comes from being integrated into existing products, PCs, storage, and networks—and that's exactly what's taking place.
The following list presents just a few of the potential applications of nanotechnology:
• Expansion of mass-storage electronics to huge multi-terabit memory capacity, increasing by a thousand fold the memory storage per unit. Recently, IBM's research scientists announced a technique for transforming iron and a dash of platinum into the magnetic equivalent of gold: a nanoparticle that can hold a magnetic charge for as long as 10 years. This breakthrough could radically transform the computer disk-drive industry.
• Making materials and products from the bottom up; that is, by building them from individual atoms and molecules. Bottom-up manufacturing should require fewer materials and pollute less.
• Developing materials that are 10 times stronger than steel, but a fraction of the weight, for making all kinds of land, sea, air, and space vehicles lighter and more fuel-efficient. Such nanomaterials are already being produced and integrated into products today.
• Improving the computing speed and efficiency of transistors and memory chips by factors of millions, making today's chips seem as slow as the dinosaur. Nanocomputers will eventually be very cheap and widespread. Supercomputers will be about the size of a sugar cube.

Near to online Storage: Nearline Storage

One of the oldest forms of storage, it is any medium used to copy and store data from the hard drive to a source from which it can be easily retrieved.
The word "nearline" is a contraction of near-online; the term is used in computer science to describe an intermediate type of data storage that represents a compromise between online storage (supporting frequent, very rapid access to data) and offline storage/archiving (used for backups or long-term storage, with infrequent access to data).
Nearline storage has many of the same features, performance characteristics, and device requirements as online storage; however, it is deployed as backup support for online storage. Demand for nearline storage is growing rapidly because more information must be archived for regulatory reasons. Nearline storage is commonly used for data backup because it can back up large volumes of data quickly, something the slower bandwidth of tape-based solutions sometimes cannot achieve. It is typically built with less expensive disk drives, such as SATA drives, to store information that must be accessed more quickly than is possible from tape or tape libraries.
Both archiving (offline) and nearline allow a reduction of database size that results in improved speed of performance for the online system. However, accessing archived data is more complex and/or slower than is the case with nearline storage, and can also negatively affect the performance of the main database, particularly when the archive data must be reloaded into that database.
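As a rough sketch of how this online/nearline/offline trade-off is usually automated, the snippet below assigns data to a tier based on how recently it was accessed; the thresholds are illustrative assumptions, not values from any particular product.

```python
from datetime import datetime, timedelta

def choose_tier(last_access, now=None):
    """Pick a storage tier from the age of the last access.
    Thresholds are arbitrary examples: hot data stays online, cooler
    data moves to nearline disk, and rarely touched data goes offline."""
    now = now or datetime.utcnow()
    age = now - last_access
    if age <= timedelta(days=30):
        return "online"    # frequent, very rapid access
    if age <= timedelta(days=365):
        return "nearline"  # cheaper disk, still automated access
    return "offline"       # tape or removable media, mounted by an operator

print(choose_tier(datetime.utcnow() - timedelta(days=3)))    # -> online
print(choose_tier(datetime.utcnow() - timedelta(days=400)))  # -> offline
```

In practice such policies also weigh object size, regulatory retention rules and retrieval cost, but the shape of the decision is the same.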
There are three major categories of near-line storage: magnetic disk, magnetic tape, and compact disc (CD). Magnetic disks include 3.5-inch diskettes, and various removable media such as the Iomega Zip disk and the Syquest disk. Tapes are available in almost limitless variety. Examples of media in the CD category are CD recordable (CD-R), CD rewriteable (CD-RW), and digital versatile disc rewriteable (DVD-RW).
Near-line storage provides inexpensive, reliable, and practically unlimited data backup and archiving, with somewhat less accessibility than integrated online storage. For individuals and small companies, it can be an ideal solution if the user is willing to tolerate some delay when storing or retrieving data. Near-line storage media are immune to infection by online viruses, Trojan horses, and worms because the media are physically disconnected from networks, computers, servers, and the Internet. When a near-line storage medium is being employed to recover data, it can be write-protected to prevent infection, but it is suggested that near-line storage media always be scanned with an anti-virus program before use.
The capacity and efficiency of near-line storage options has improved greatly over the years. Magnetic tape is one of the oldest formats still in use. The tape is available in formats that work with a wide range of large systems and are frequently used to create backup files for corporations on a daily basis. The tapes are easily stored and can be used to reload the most recently saved information in the event of a system failure. Magnetic tapes also function as an excellent electronic history, making it possible to research when a given bit of data was entered into the system.
The second type of near-line storage is the magnetic disk. These include diskettes developed for use with personal computers, now considered obsolete in many quarters, as well as disks developed for specific purposes such as storing large quantities of zipped files. Since the early 21st century, most desktop and laptop computers have shipped without a magnetic diskette drive, although mainframes sometimes still make use of some type of magnetic disk.
As the most recent innovation in removable storage, the CD provides a great deal of storage in a small space and encompasses different formats for different kinds of file saving. The CD-R, or CD recordable, makes it possible to copy a wide range of text and similar documents; the CD-RW, or CD rewritable, makes it easy to load data onto the disc and move it to another system; and the DVD-RW, or digital versatile disc rewritable, allows the copying of all types of media, including video.
One of the advantages of near-line storage is that these devices offer a means of protecting data from harm. This includes keeping the data free from viruses or bugs that may infect the hard drive at some point. While the hard drive may become corrupted and damage files loaded on the drive, data housed on near-line storage devices remains unaffected and can be used to reload the hard drive once the system is cleansed of any type of malware. Another benefit of near-line storage is the fact that this storage option is extremely inexpensive. Individuals and small companies find that utilizing these simple data storage devices provides a great deal of security and peace of mind without requiring any type of ongoing expense. Once the device is purchased and the storage of data is complete, the information can be filed in a cabinet or a drawer and restored when and as needed.
In the event that a near-line storage device is used frequently to load and unload data, it is a good idea to scan the disk or tape with some type of antivirus software before commencing the activity. There is always the slight possibility that the medium became infected when used last. Scanning and removing any viruses or other potentially damaging files will ensure the virus does not have the chance to proliferate to other systems in the network.
Near-line storage is the on-site storage of data on removable media. The removable storage idea dates back to the mainframe computer and, in the age of the smaller computer, remains popular among individuals, small businesses, and the large enterprise.

Keep the data secure with Offline storage

As the name suggests, offline storage is storage in which the data is physically removed from the network and cannot be accessed by the computer. It is commonly referred to as "archive" or "backup" storage and is typically a tape drive or a low-end disk drive (virtual tape). Offline, or disconnected, storage is designed to hold data for long periods of time; because the data is archived, offline storage appliances focus on data accuracy, protection, and security. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected; it must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. Optical discs and flash memory devices are the most popular media, with removable hard disk drives used to a much lesser extent; in enterprise use, magnetic tape is predominant. Older examples are floppy disks, Zip disks, and punched cards.
Off-line storage is also used to transfer information, since the detached medium can easily be transported physically. Additionally, should a disaster such as a fire destroy the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security: because it is physically inaccessible from a computer, data confidentiality and integrity cannot be affected by computer-based attack techniques. And if the information stored for archival purposes is accessed seldom or never, off-line storage is less expensive than tertiary storage.
Backing up to tape is very useful for data that is created online and then ages or becomes less important to the company; many companies migrate such data to less expensive storage media like tape. When the tapes contain replicas of online data, or the data simply needs to be kept for archival purposes, they are often shipped offsite and offline as part of the company's data migration plan. Offline storage once meant exclusively tapes on shelves or carted off to vaults; companies like tape because it is removable, relatively cheap, and has a long shelf life for archival. But many companies would like to see those same attributes apply to disk, and now they can.
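Because an offline copy may sit in a vault for years before it is read again, it is common practice to record a checksum when the medium is written and verify it when the medium comes back. The sketch below shows the idea with SHA-256; the file name is only a placeholder.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its SHA-256 digest, so very large archives
    can be checked without loading them into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# When writing the archive, store the digest alongside the catalog entry;
# when the medium returns from the vault, recompute and compare.
# expected = sha256_of("backup_2010_06.tar")   # recorded at write time
# assert sha256_of("backup_2010_06.tar") == expected, "archive corrupted"
```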
Offline storage is very practical as a data transfer medium and can also serve as a good backup because it can be kept remotely, where it will not be affected by any disaster that might hit the original source of the data. It also provides good security, since the data cannot easily be accessed from a computer system.

Wednesday, June 16, 2010

Use of Solid state physics in storage

SSDs have been used in enterprise storage to speed up applications and improve performance without the cost of adding servers; by using solid-state memory to store data, they replace the traditional hard disk drive.
The name solid-state drive has nothing to do with liquids or solids as states of matter; the term "solid-state" (from solid-state physics) refers to the use of semiconductor devices rather than electron tubes, and in the present context it distinguishes solid-state electronics from electromechanical devices. Solid-state drives enjoy greater stability than their disk counterparts because they have no moving parts: they are less fragile than hard disks, they are silent (unless a cooling fan is used), and with no mechanical delays they usually offer low access times and latency. Solid-state drive (SSD) technology was first marketed to the military and niche industrial markets in the mid-1990s.
Almost all electronics we have today are made up of semiconductors and chips. In the case of an SSD, this means that the primary storage medium is semiconductor memory rather than magnetic media such as a hard drive.
An SSD looks no different from a traditional hard drive. This design allows the SSD to be put into a notebook or desktop computer in place of a hard drive; to do so, it uses the standard dimensions of a 1.8-, 2.5- or 3.5-inch hard drive and either the ATA or SATA interface, so the two remain compatible.
This type of storage already exists in the form of flash memory drives that plug into the USB port; in fact, solid-state drives and USB flash drives both use the same type of non-volatile memory chips, which retain their information even when they have no power. The difference lies in the form factor and capacity of the drives: while a flash drive is designed to be external to the computer, an SSD is designed to reside inside the computer in place of a more traditional hard drive.
An SSD is commonly built from DRAM volatile memory or, more often, NAND flash non-volatile memory. Most SSD manufacturers use non-volatile flash memory to create more rugged and compact devices for the consumer market. These flash-based SSDs, also known as flash drives, do not require batteries and are often packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch). A further advantage is that non-volatility allows flash SSDs to retain their contents even during sudden power outages, ensuring data persistence. Compared with DRAM SSDs, flash SSDs are slower, and some designs are even slower than traditional HDDs on large files; but flash SSDs have no moving parts, so seek times and the other delays inherent in conventional electromechanical disks are negligible.
SSDs based on volatile memory such as DRAM have ultrafast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of Flash SSDs or traditional HDDs. DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation. (Similar to the hibernate function used in modern operating systems.) These types of SSD are usually fitted with the same type of DRAM modules used in regular PCs and servers, allowing them to be swapped out and replaced with larger modules.
Flash-based solid-state drives are very versatile and can be used to create network appliances from general-purpose PC hardware: a write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs, and appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware. When NAND flash is used as a cache in front of a conventional drive, it offers a potential power saving; however, typical usage patterns produce cache misses in the NAND flash, leading to continued spinning of the drive platter, or to much longer latency if the drive needs to spin up. Such devices may be slightly more energy efficient but do not necessarily perform any better.
On the other hand, flash memory has a limited lifetime and will often wear out after 1,000,000 to 2,000,000 write cycles (1,000 to 10,000 per cell) for MLC, and up to 5,000,000 write cycles (100,000 per cell) for SLC. Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, a technique called wear leveling. Another concern is the security implications: for example, encrypting existing unencrypted data in place on a flash-based SSD cannot be done securely, because wear leveling causes the newly encrypted sectors to be written to a physical location different from the original one, so the data remains unencrypted in its original physical location. Beyond these disadvantages, the capacity of SSDs is currently lower than that of hard drives, although flash SSD capacity is predicted to increase rapidly, with drives of 1 TB already released for enterprise and industrial applications. The asymmetry between read and write performance can also cause problems with functions that expect read and write operations to complete in a similar timeframe; SSDs currently have much slower write performance than read performance.
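To picture how wear leveling spreads those limited write cycles around, here is a toy flash translation layer that always sends the next write to the least-worn physical block. Real SSD firmware is far more sophisticated (it also handles garbage collection, bad blocks and write combining), so treat this purely as an illustration.

```python
class ToyWearLeveler:
    """Map logical block writes onto the physical block with the fewest
    erase cycles, spreading wear across the whole device."""

    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks
        self.mapping = {}  # logical block -> physical block

    def write(self, logical_block):
        # Pick the physical block that has been erased the fewest times.
        physical = min(range(len(self.erase_counts)),
                       key=self.erase_counts.__getitem__)
        self.erase_counts[physical] += 1
        self.mapping[logical_block] = physical
        return physical

ftl = ToyWearLeveler(physical_blocks=8)
for i in range(1000):
    ftl.write(i % 4)           # keep rewriting the same four logical blocks
print(ftl.erase_counts)        # wear ends up spread roughly evenly across all 8
```

Even though the workload keeps rewriting the same four logical blocks, the erase counts end up spread across all eight physical blocks instead of exhausting four of them.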
As a result of wear leveling and write combining, the performance of SSDs degrades with use. DRAM-based SSDs (but not flash-based SSDs) require more power than hard disks when operating, and they continue to draw power when the computer is turned off, while hard disks do not.
Set against these disadvantages, solid-state drives have several advantages over magnetic hard drives, most of which come from the fact that the drive has no moving parts. While a traditional drive has motors to spin up the magnetic platters and move the drive heads, all storage on a solid-state drive is handled by flash memory chips. This provides three distinct advantages: lower power usage, faster data access, and higher reliability. Solid-state drives consume very little power in portable computers; because there is no power draw for motors, the drive uses far less energy than a regular hard drive. Data access is faster as well: since the drive does not have to spin up a platter or move drive heads, data can be read almost instantly. Reliability is also a key factor for portable drives. Hard drive platters are fragile, sensitive materials, and even small jarring movements from an impact can render the drive completely unreadable; since an SSD stores all its data in memory chips, there are no delicate moving parts to be damaged in any sort of impact.
As with most computer technologies, the primary limiting factor of using the solid state drives in notebook and desktop computers is cost. These drives have actually been available for some time now, but the cost of the drives is roughly the same as the entire notebook they could be installed into.
The other problem affecting the adoption of the solid state drives is capacity. Current hard drive technology can allow for over 200GB of data in a small 2.5-inch notebook hard drive. Most SSD drives announced at the 2007 CES show are of the 64GB capacity. This means that not only are the drives much more expensive than a traditional hard drive, they only hold a fraction of the data.
All of this is set to change soon, though. Several companies that specialize in flash memory have announced upcoming products that aim to push the capacities of solid-state drives closer to those of normal hard drives, at prices even lower than current SSDs. This will have a huge impact on notebook data storage. SSDs are a rapidly developing technology, and the performance of flash SSDs is difficult to benchmark, but in the coming years the technology is surely going to extend its reach.

King Bluetooth: The name behind the Bluetooth technology



It’s a short-range communications technology that replaces the cables connecting portable or fixed devices while maintaining high levels of security. The key features of Bluetooth technology are robustness, low power, and low cost. The Bluetooth Specification defines a uniform structure for a wide range of devices to connect and communicate with each other.

Technological development has reached a point where devices still connected via cables or wires seem outdated, and wireless technology has given us a new term: "Bluetooth". The term is not new to most of us; we often use this technology in our day-to-day lives to transfer information and files. It is a short-range wireless radio technology that allows electronic devices to connect to one another. The name has a small story behind it that justifies its unusual sound: the developers of this wireless technology first used "Bluetooth" as a code name, but as time passed, the name stuck. The word "Bluetooth" is actually taken from the 10th-century Danish king Harald Bluetooth, who was influential in uniting Scandinavian Europe during an era when the region was torn apart by wars and feuding clans. Fittingly, Bluetooth technology was first developed in Scandinavia and has been able to unite differing industries such as the cell phone, computing, and automotive markets. Bluetooth wireless technology simplifies and combines multiple forms of wireless communication into a single, secure, low-power, low-cost, globally available radio frequency.
Bluetooth technology makes connections just as cables connect a computer to a keyboard, mouse, or printer, or as a wire connects an MP3 player to headphones, but it does so without the cables and wires. With Bluetooth there is no more worrying about which cable goes where, or getting tangled in the mess.
Bluetooth is a packet-based protocol with a master-slave structure. Connections between Bluetooth enabled electronic devices allow these devices to communicate wirelessly through short-range, ad hoc networks known as piconets. Each device in a piconet can simultaneously communicate with up to seven other devices within that single piconet, and each device can also belong to several piconets simultaneously. This means the ways in which you can connect your Bluetooth devices are almost limitless.
Bluetooth actually is one of the secure ways to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands of 1 MHz width in the range 2402-2480 MHz. This is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
Bluetooth technology’s adaptive frequency hopping (AFH) capability was designed to reduce interference between wireless technologies sharing the 2.4 GHz spectrum. AFH works within the spectrum to take advantage of the available frequency. This is done by the technology detecting other devices in the spectrum and avoiding the frequencies they are using. This adaptive hopping among 79 frequencies at 1 MHz intervals gives a high degree of interference immunity and also allows for more efficient transmission within the spectrum.
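A rough model of that behaviour is sketched below: a pseudo-random hop sequence over the 79 one-MHz channels, with channels observed to be noisy removed from the hop set. The channel-quality input and the hop-selection function are stand-ins, not the actual Bluetooth baseband algorithm.

```python
import random

CHANNELS = list(range(79))            # channel k sits at (2402 + k) MHz

def build_hop_set(bad_channels):
    """Adaptive part: drop channels observed to interfere (e.g. with Wi-Fi)."""
    return [ch for ch in CHANNELS if ch not in bad_channels]

def hop_sequence(hop_set, seed, hops):
    """Pseudo-random hop sequence that master and slave can reproduce
    from a shared seed (a stand-in for the real hop-selection kernel)."""
    rng = random.Random(seed)
    return [2402 + rng.choice(hop_set) for _ in range(hops)]

noisy = {10, 11, 12, 13}              # hypothetical channels with interference
print(hop_sequence(build_hop_set(noisy), seed=42, hops=10))  # frequencies in MHz
```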
Bluetooth range varies depending on the class of radio used in an implementation: Class 3 radios reach up to 1 meter (3 feet); Class 2 radios, most commonly found in mobile devices, reach about 10 meters (33 feet); and Class 1 radios, used primarily in industrial use cases, reach about 100 meters (300 feet).
Wireless Nature
There are tons of benefits of using wireless devices. In addition to improving safety as a result of eliminating the clutter of wires and associated hazardous connections, wireless technology also offers many convenient advantages. For example, when you are traveling with your laptop, PDA, MP3 player and other devices, you no longer have to worry about bringing along all of your connecting cables.
Inexpensive
Bluetooth technology is inexpensive for companies to implement, which results in lower overall manufacturing costs, and the ultimate benefit goes to consumers: Bluetooth devices are relatively cheap.
Automatic
Usage is very simple, and one does not have to struggle to get devices connected. When two or more Bluetooth devices come within range (up to about 30 feet) of one another, they automatically begin to communicate without you having to do anything. Once communication begins, the devices set up a Personal Area Network, or piconet.
Standardized Protocol
Since Bluetooth is a standardized wireless specification, a high level of compatibility among devices is guaranteed. The specification defines various profiles, each specific to a particular function. For instance, when a Bluetooth-enabled cell phone and a Bluetooth headset (both supporting the same profile) communicate with one another, they will understand each other without the user having to do anything, even if the devices are of different makes or models.
Low Interference
Bluetooth devices avoid interference with other wireless devices by using a technique known as spread-spectrum frequency hopping and by using low-power wireless signals.
Low Energy Consumption
Bluetooth uses low power signals. As a result, the technology requires little energy and will therefore use less battery or electrical power.
Share Voice and Data
The Bluetooth standard allows compatible devices to share both voice and data communications. For example, a Bluetooth-enabled cell phone can share voice communications with a compatible Bluetooth headset; the same cell phone may also be capable of establishing a GPRS connection to the Internet. Then, using Bluetooth, the phone can connect to a laptop, and the laptop can surf the web or send and receive email.
Control
Unless a device is already paired to your device, you have the option to accept or reject the connection and file transfer. This prevents unnecessary or infected files from unknown users from transferring to your device.
Instant Personal Area Network (PAN)
Up to seven compatible Bluetooth devices can connect to one another within a proximity of up to 30 feet, forming a PAN, or piconet. Multiple piconets can be automatically set up in a single room.
Upgradeable
The Bluetooth standard is upgradeable. A development group, the Bluetooth Special Interest Group (SIG), is responsible for evolving the standard so that newer versions offer new advantages while remaining backward compatible with older versions.
Bluetooth has many positive aspects and it is difficult to find much of a downside, but there are still some areas that need attention. Like other communication technologies, Bluetooth faces issues of privacy and identity theft. These issues are readily combatable, however, and various measures are already in place to provide for the secure use of Bluetooth technology.
Bluetooth has a shortcoming when it comes to file-sharing speed. Compared with infrared's transfer rate of up to 4.0 Mbps, Bluetooth can only reach about 1.0 Mbps, so it transfers files relatively slowly; for transferring or sharing larger files at close range, another wireless technology such as infrared is better.
Handy, portable, sophisticated, easy to use and wireless, Bluetooth can be considered a technology of the future that is going to remain in the market for a long time.

Friday, June 4, 2010

A green touch to technology


Green computing is usually also referred to as Green IT. The idea is to have the least possible human impact on the environment; beyond that, it aims to achieve environmental sustainability through the environmentally responsible use of computers and their resources.
The 21st century is the era of computers, gadgets and technologies, and these are fuelling energy issues. As topics like global warming, climate change and carbon emissions get hotter, it is time to "go green" not only in our regular lives but also in technology.

Green computing, or green IT, refers to environmentally sustainable computing: the efficient use of resources in computing. The term generally relates to using computing resources in a way that minimizes environmental impact, maximizes economic viability and meets social responsibilities. Most of us think computers are non-polluting and consume very little energy, but this is a misconception. It is estimated that of the $250 billion per year spent on powering computers worldwide, only about 15% of that power is spent computing; the rest is wasted idling. Energy saved on computer hardware and computing would therefore equate to tonnes of carbon emissions saved per year. Given how widely information technology is used, the industry has to lead a revolution of sorts by turning green in a manner no industry has ever done before.

Green IT is "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—such as monitors, printers, storage devices, and networking and communications systems—efficiently and effectively with minimal or no impact on the environment. It includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling."

Opportunities lie in green technology like never before, and organizations see it as a way to create new profit centres while helping the environmental cause. A plan for green IT should include new electronic products and services with optimum efficiency and every possible option for energy savings. Recycling computing equipment, for example, keeps harmful materials such as lead, mercury, and hexavalent chromium out of landfills, and can also replace equipment that would otherwise need to be manufactured, saving further energy and emissions. More efficient power supplies help by running at 80% efficiency or better, and power management software helps computers sleep or hibernate when not in use. On the far horizon, reversible computing (which also includes quantum computing) promises to reduce power consumption by a factor of several thousand, but such systems are still very much in the laboratory. The best way to recycle a computer, however, is to keep it and upgrade it. It is also important to design computers that can run on the low power obtained from non-conventional energy sources such as solar panels, pedalling a bike or turning a hand crank.

Modern IT systems rely upon a complicated mix of networks and hardware; as such, a green computing initiative must cover all of these areas as well. There are considerable economic motivations for companies to take control of their own power consumption; of the power management tools available, one of the most powerful may still be simple, plain, common sense.
Product longevity
The PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC, so look for product longevity, including upgradeability and modularity. For instance, manufacturing a new PC leaves a far bigger ecological footprint than manufacturing a new RAM module to upgrade an existing machine, a common upgrade that saves the user from having to purchase a new computer.
Resource allocation
Algorithms can also be used to route data to data centers where electricity is less expensive. This approach does not actually reduce the amount of energy being used; it only reduces the cost to the company using it. However, a similar strategy could be used to direct traffic to rely on energy that is produced in a more environmentally friendly or efficient way. A similar approach has also been used to cut energy usage by routing traffic away from data centers experiencing warm weather; this allows computers to be shut down to avoid using air conditioning.
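A minimal sketch of that routing idea, assuming the scheduler simply picks the site where a kilowatt-hour is cheapest (the prices below are invented placeholders; a real system would use live tariff or carbon-intensity feeds):

```python
# Hypothetical $/kWh prices per data center; in practice these would be
# pulled from a live electricity tariff or carbon-intensity feed.
electricity_price = {"us-east": 0.11, "us-west": 0.09, "eu-north": 0.07}

def pick_datacenter(prices):
    """Route new work to the site where a kilowatt-hour costs least."""
    return min(prices, key=prices.get)

print(pick_datacenter(electricity_price))   # -> "eu-north"
```

The same one-liner works for carbon intensity instead of price: swap the dictionary values and the cheapest site becomes the greenest one.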
Virtualization
Computer virtualization refers to the abstraction of computer resources, such as running two or more logical computer systems on one set of physical hardware.
Terminal servers
Terminal servers have also been used in green computing. Users at a terminal connect to a central server; all of the actual computing is done on the server, but the end user experiences the operating system on the terminal. These can be combined with thin clients, which use as little as one-eighth the energy of a normal workstation, resulting in a decrease in energy costs and consumption.
Power management
The Advanced Configuration and Power Interface (ACPI), an open industry standard, allows an operating system to directly control the power-saving aspects of its underlying hardware. This allows a system to automatically turn off components such as monitors and hard drives after set periods of inactivity. In addition, a system may hibernate, where most components (including the CPU and the system RAM) are turned off.
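The sketch below mimics that staged behaviour: after set periods of inactivity the "system" turns off the monitor, spins down the disks and finally hibernates. The timeouts are arbitrary examples, not ACPI defaults.

```python
def power_state(idle_minutes):
    """Return the power-saving action for a given idle time, mimicking the
    staged timeouts an operating system configures through ACPI."""
    if idle_minutes >= 30:
        return "hibernate"          # CPU and RAM powered off, state saved to disk
    if idle_minutes >= 15:
        return "spin down disks"
    if idle_minutes >= 5:
        return "turn off monitor"
    return "fully on"

for idle in (2, 7, 20, 45):
    print(idle, "min idle ->", power_state(idle))
```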
With this green vision, the industry has been focusing on power efficiency throughout the design and manufacturing of its products using a range of clean-computing strategies, and it is striving to educate markets on the benefits of green computing for the sake of the environment as well as productivity and the overall user experience. A few initiatives stand out:
Carbon-free computing
The idea is to reduce the "carbon footprint" of users: the amount of greenhouse gases produced, measured in units of carbon dioxide (CO2). Some PC products are certified carbon free, with the vendor taking responsibility for the amounts of CO2 they emit. Manufacturing techniques such as silicon-on-insulator (SOI) technology and strained-silicon capping films on transistors (known as "dual stress liner" technology) have also contributed to reduced power consumption in these products.
Solar Computing
Solar cells paired with power-efficient silicon, platform, and system technologies make it possible to develop fully solar-powered devices that are non-polluting, silent, and highly reliable. Solar cells require very little maintenance throughout their lifetime, and once the initial installation costs are covered, they provide energy at virtually no cost.
Energy-efficient computing
Another green-computing initiative is the development of energy-efficient platforms for low-power, small-form-factor (SFF) computing devices. In 2005, VIA introduced the VIA C7-M and VIA C7 processors, which have a maximum power consumption of 20W at 2.0GHz and an average power consumption of 1W. These energy-efficient processors produce over four times less carbon during their operation and can be embedded efficiently in solar-powered devices. Intel, the world's largest semiconductor maker, revealed eco-friendly products at a recent conference in London; the company uses virtualization software, a technique that enables several physical systems to be combined into virtual machines running on a single, powerful base system, significantly reducing power consumption.
Power supply
Desktop computer power supplies (PSUs) are generally 70–75% efficient, dissipating the remaining energy as heat. An industry initiative called 80 PLUS certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
Storage
Smaller form factor (e.g. 2.5 inch) hard disk drives often consume less power per gigabyte than physically larger drives.


Taken together, green computing spans four complementary areas:
Green use — reducing the energy consumption of computers and other information systems as well as using them in an environmentally sound manner
Green disposal — refurbishing and reusing old computers and properly recycling unwanted computers and other electronic equipment
Green design — designing energy-efficient and environmentally sound components, computers, servers, cooling equipment, and data centers
Green manufacturing — manufacturing electronic components, computers, and other associated subsystems with minimal impact on the environment

Store it in a new way: the green way


Green storage refers to a broad spectrum of solutions, ranging from sheer hardware efficiency to more application-level software.
IT has already made inroads into controlling the energy costs associated with servers, and to achieve similar goals it is now chanting the green mantra everywhere. The latest target is storage: green storage technologies such as virtualization, thin provisioning, deduplication and new disk technologies can significantly reduce power and cooling costs in a data center. Most storage technologies marketed as green today are really technologies designed to improve storage utilization, which lets organizations store their data on fewer disk drives and, in turn, reduce total cost of ownership (TCO) as well as power and cooling requirements.
Since data centers are the largest energy consumers in most organizations, they have understandably become a focal point for trying to reduce energy costs. Yet while storage is part of the data center, it has so far escaped much of this intense focus. This is primarily because the growing number of high-density servers in the data center over the past several years has naturally captured the most attention as organizations seek to resolve the most glaring power and cooling issues. Furthermore, responsibility for data center power and cooling costs has traditionally resided in the corporate facilities budget, meaning that many data center managers are neither directly responsible for, nor at times even aware of, the costs they generate.
Storage solutions currently marketed as green fall mainly into two categories: technologies that increase storage utilization, enabling users to store more data with fewer disk drives, and technologies that directly reduce power and/or cooling costs or are inherently green. Technologies in the first category make the easiest business case; these are technologies that organizations should be using anyway, if available, because they directly and immediately save resources. Technologies in the second category are specifically targeted at the power and cooling issues of storage, rather than providing power and cooling benefits as a byproduct of something else. Technologies that increase storage utilization rates are considered green because they help users store more data with fewer disk drives, an efficiency that automatically reduces power and cooling requirements.
Storage virtualization
This is the ability to present a file, volume or storage device in such a way that its physical complexity is hidden, and the application and the storage administrator see a pool of available resources instead of separate silos of dedicated storage.
Thin provisioning
Thin provisioning (TP) is a method of optimizing the efficiency with which the available space is utilized in storage area networks (SAN). TP operates by allocating disk storage space in a flexible manner among multiple users, based on the minimum space required by each user at any given time.
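Here is a toy model of that allocate-on-write behaviour, assuming storage is handed out in fixed-size extents only when a volume is actually written to (the extent size and pool size are made-up numbers):

```python
class ThinPool:
    """Present large virtual volumes while allocating physical extents
    only on the first write to each region."""

    def __init__(self, physical_extents, extent_mb=16):
        self.free = physical_extents
        self.extent_mb = extent_mb
        self.allocated = {}               # (volume, extent index) -> True

    def write(self, volume, offset_mb):
        key = (volume, offset_mb // self.extent_mb)
        if key not in self.allocated:     # first touch: allocate backing store
            if self.free == 0:
                raise RuntimeError("pool exhausted - time to add disks")
            self.free -= 1
            self.allocated[key] = True

pool = ThinPool(physical_extents=100)     # 1.6 GB of real disk at 16 MB extents
pool.write("vol-A", offset_mb=0)          # volumes can *claim* far more than this
pool.write("vol-B", offset_mb=512)
print(pool.free)                          # 98 extents still unallocated
```

Each volume can present a capacity far larger than the physical pool; the administrator only adds disks when the pool itself starts running dry.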
Thin-provisioning-aware replication
Thin-provisioning-aware replication (sometimes called thin replication) enhances remote replication capabilities so that only the allocated space to which users have written data is transmitted to the secondary site.
Data reduction techniques
Whether they are called file-level single instance store (SIS), data deduplication, data compression or redundant file elimination, the intent of data reduction techniques is to reduce the amount of capacity needed to store a given amount of information. It is especially useful in backup or archiving scenarios. Backup, for example, tends to be a particularly wasteful activity. Often, the data change rate is less than 10% of new and modified files per week. This means that the weekly full backups are sending and storing at least 90% unnecessary data, in addition to the redundancies in data throughout the week. Gartner considers data reduction a transformational technology and rates it as one of the fastest deployed storage technologies that the market has seen in more than a decade.
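A minimal sketch of the idea behind deduplication: split the backup stream into chunks, hash each chunk, and store a chunk only the first time it is seen. Real products typically use variable-size, content-defined chunking, so this fixed-size version is only an illustration.

```python
import hashlib

def dedupe(stream: bytes, chunk_size=4096, store=None):
    """Store each unique chunk once; return the recipe of hashes needed
    to reconstruct the stream, plus the shared chunk store."""
    store = {} if store is None else store
    recipe = []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)      # only new chunks consume space
        recipe.append(key)
    return recipe, store

monday = b"A" * 40_000 + b"report v1"
tuesday = b"A" * 40_000 + b"report v2"    # mostly unchanged data
_, store = dedupe(monday)
_, store = dedupe(tuesday, store=store)
print(len(store))    # far fewer chunks than two full backups would need
```

Backing up Tuesday's nearly identical data adds only the chunks that actually changed, which is exactly why repeated full backups shrink so dramatically.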
Boot from Storage Area Network (SAN)
This reduces the need for internal disk drives in servers (especially in rack server and blade server environments), by allowing the boot image to reside in the SAN. It also improves server reliability (no disks) and server availability (again, because there are no disks), and it helps in rapid server imaging. When coupled with thin provisioning and thin-provisioning-aware replication, multiple boot images can be stored with great space efficiency.
Quality of Service
These optimize the use of disk storage system resources by implementing user-defined policies and/or adaptive algorithms to maximize application performance while minimizing back-end storage costs. Quality of Service (QoS) storage features improve performance and throughput using a variety of techniques, including cache partitioning and binding, and input/output (I/O) prioritization. QoS also has the potential to improve service-level agreements (SLAs), because more data can be stored online economically, and it provides cost savings when coupled with virtualization.
Inherently green storage technologies
While efforts to develop technologies specifically designed to reduce power consumption or cooling requirements are in their infancy from a storage hardware perspective, there are some storage solutions on the market that are inherently green. These technologies should be evaluated for their potential benefit in an organization's specific environment and considered when making purchase decisions where those benefits are judged to be high.
Massive Arrays of Idle Disks
Massive Arrays of Idle Disks (MAID) store data on a large group of disk drives that can be spun down when not in use. MAID can also be used to spin down disks (and save power and cooling costs) during non-business hours in companies that do not run a 24/7 operation, or as a third tier of storage. New and future MAID implementations may incorporate intelligent power management (IPM) techniques that allow different degrees of spin-down, increasing the user's options for trading power savings against response times. The three IPM levels are heads unloaded; heads unloaded with the drive slowed to 4,000 rpm; and sleep mode, where the drives stop spinning altogether. In addition to power and cooling savings, MAID and IPM approaches can also prolong the lives of disk drives. Combined with data reduction techniques such as data deduplication, MAID storage provides a compelling green storage solution.
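The sketch below models those IPM levels as a simple idle-time policy; the thresholds are invented, and a real MAID controller would also weigh predicted access patterns and spin-up latency.

```python
def ipm_state(idle_minutes):
    """Map idle time to an intelligent-power-management level for a MAID drive.
    Deeper levels save more power but take longer to service the next I/O."""
    if idle_minutes >= 60:
        return "sleep"                       # platters stopped entirely
    if idle_minutes >= 20:
        return "heads unloaded, 4000 rpm"
    if idle_minutes >= 5:
        return "heads unloaded"
    return "active"

for idle in (1, 10, 30, 90):
    print(idle, "min idle ->", ipm_state(idle))
```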
Small form factor disk drives
These are 2.5-inch hard drives that, in addition to increasing storage spindle density per square foot, reduce the number of voltage conversions within the system because they require less voltage than today's 3.5-inch, high-rpm disk drives.
Airflow-enhanced cabinetry
A focus in the server environment for several years, this has begun to show up more in storage disk arrays. These designs do nothing to improve capacity, utilization or performance. Rather, they are designed to improve cooling, with the goal of positively affecting power and cooling issues (such as data center configuration) and costs.
Although the concept is really promising, this technology is highly complex and may require delicate tuning to achieve true energy savings. For example, difficulty in accurately predicting idle periods can result in disks spinning up soon after they were spun down, yielding less energy conservation than anticipated. Furthermore, there is a risk that drive mechanics will become less reliable with repeated spinning up and spinning down. In the absence of conclusive evidence proving the long-term feasibility of turning off disk drives, customers are hesitant to adopt such technologies in haste. But keep in mind that environmental issues are gaining serious commercial momentum and, fueled by the growing number of local and global green initiatives, they are rising ever more insistently up the corporate agenda.

The new era of networking: Green Networking


There is no formal definition of “green” in networking. In simple terms, it is seen as the practice of selecting energy-efficient networking technologies and products and minimizing resource use wherever possible. Green networking -- consolidating devices, relying more on telecommuting and videoconferencing, and using virtualization to reduce power consumption across the network -- is an offshoot of the trend towards "greening" just about everything from cars to coffee cups. That trend has encompassed IT in general, the data center and the network.
Green Networking can be a way to help reduce carbon emissions by the Information Technology (IT) industry. It covers all aspects of the network: personal computers, peripherals, switches, routers and communication media. The energy efficiency of all network components must be optimized to have a significant impact on their overall energy consumption. The efficiencies gained by having a Green Network will, in turn, reduce CO2 emissions and thus help mitigate global warming. The Life Cycle Assessment (LCA) of the components must also be considered; LCA is the evaluation of the environmental impact of a product from cradle to grave. New ICT technologies must be explored, and their benefits assessed in terms of energy efficiency and the associated gains in minimizing the environmental impact of ICT. Desktop computers and monitors consume 39% of all electrical power used in ICT; in 2002, this equated to 220 Mt (million tonnes) of CO2 emissions.

Several techniques are available to reduce power consumption, and the equivalent CO2 emissions, across the network. To reduce the carbon footprint of desktop PCs, their usage must be efficiently managed. Old cathode ray tube (CRT) monitors should be replaced with liquid crystal display (LCD) screens, which reduce monitor energy consumption by as much as 80%. Replacing all desktop PCs with laptops would achieve as much as a 90% decrease in power consumption. Energy can also be saved with power-saving software installed on desktops and running all the time; such software forces PCs into standby when they are not in use. Another option is to use solid-state drives, which consume around 50% less power than mechanical hard drives.
When considering the Local Area Network (LAN) infrastructure, probably the most power-hungry device is the network switch. PoE (Power over Ethernet) is a relatively new technology introduced into modern network switches. PoE switch ports provide power for network devices as well as transmitting data, and are used by IP phones, wireless LAN access points and other network-attached equipment. A PoE switch port can provide power to a connected device and scale that power back when it is not required.
Another solution is to use power management software built into the network switch. With power management software, we can instruct the network switch to turn off ports when they are not in use; for a fully powered PoE port this would equate to a saving of 15.4 W × 16 hours × 365 days = 89,936 watt-hours, or roughly 90 kWh, per port per year.
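The arithmetic behind that figure is simple (15.4 W being the maximum power an 802.3af PoE port can deliver; the 16 hours assumes the port is powered down outside business hours):

    watts_per_port = 15.4        # maximum 802.3af PoE power per port
    hours_off_per_day = 16       # port powered down outside business hours
    days_per_year = 365

    wh_saved = watts_per_port * hours_off_per_day * days_per_year
    print(f"{wh_saved:,.0f} Wh saved, i.e. about {wh_saved / 1000:.0f} kWh per port per year")
    # -> 89,936 Wh, roughly 90 kWh per port per year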
As networks became more critical to daily business operations, additional network services were required; network infrastructure devices also had to support VPNs (Virtual Private Networks) and data encryption. Integrating these network services into the network infrastructure, rather than deploying separate appliances for each function, makes the network more energy efficient and reduces the carbon footprint of the network infrastructure.
Due to the high power consumption of Data Centers, there are several proposed solutions to save energy and make Data Centers more energy efficient. These include: taking the Data Center to the power source instead of taking the power source to the Data Center, consolidation, virtualization, improved server and storage performance, power management, high-efficiency power supplies and improved data center design.

Traditionally the electrical power needed for Data Centers is supplied by the electricity grid. Using alternate energy sources at the Data Center is often impractical. The solution is to take the Data Center to the energy source. The energy source could be solar, wind, geothermal, or some combination of these alternate forms of energy. Instead of the power traveling great distances, the data would need to travel great distances. For this to be feasible, we would require a broadband network infrastructure.
Consolidation
Going through a systematic program of consolidating and optimizing your machines and workloads can achieve increased efficiencies at the Data Center.
Virtualization
Virtualization is one of the main technologies used to implement a “Green Network”. It is a technique for running multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments. Virtualization allows the pooling of resources, such as computing and storage, that are normally underutilized, and it offers the following advantages: less power, less cooling, smaller facilities and less network infrastructure. Virtualization can also be used to replace the desktop: with desktop virtualization one can use a thin client that consumes very little power (typically 4 watts), while the image and all other programs required by the client are downloaded from one of the virtualization servers.
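A back-of-the-envelope sketch shows why consolidation onto virtualized hosts pays off; the server counts and wattages below are purely illustrative assumptions, not measurements.

    physical_servers = 20
    watts_per_server = 300       # assumed draw of a lightly loaded standalone box
    virtualization_hosts = 4
    watts_per_host = 450         # assumed draw of a consolidated host

    before = physical_servers * watts_per_server
    after = virtualization_hosts * watts_per_host
    print(f"before: {before} W, after: {after} W, direct saving: {before - after} W")
    # Cooling load falls with the IT load, so the facility-level saving is larger still.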
Improved Server and Storage Performances
New multicore processors execute at more than four times the speed of previous processors, and new high-speed disk arrays with high-performance 144-gigabyte Fibre Channel drives can reduce transfer times and improve efficiencies within the Data Center.
Power Management
It is estimated that servers draw up to 30% of their peak electricity consumption even when they are idle. Although power management tools are available, they are not necessarily being implemented. Many new CPU chips can scale back voltage and clock frequency on a per-core basis, and further savings can be made by reducing power to memory. By implementing power management techniques, companies can save both energy and cost.
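The reason voltage and frequency scaling is so effective follows from the standard approximation for dynamic CMOS power, P ≈ C·V²·f; the numbers in the sketch below are illustrative only.

    def dynamic_power(capacitance, voltage, frequency):
        # Standard approximation: dynamic power scales with C * V^2 * f.
        return capacitance * voltage ** 2 * frequency

    full_speed = dynamic_power(1.0, 1.2, 3.0)     # nominal voltage and clock
    scaled_back = dynamic_power(1.0, 1.0, 2.0)    # a core scaled back when lightly used
    print(f"power at reduced voltage/frequency: {scaled_back / full_speed:.0%} of full")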
High Efficiency Power Supplies
The use of high-efficiency power supplies should be considered for all Data Center devices. Poor-quality power supplies not only have low efficiency, but their efficiency is also a function of utilization: at low utilization, the power supply is even less efficient. For every watt of electrical power wasted in a Data Center device, roughly another watt is used in extra cooling, so investing in high-efficiency power supplies can effectively double the power savings. Another issue is that Data Center designers quite often overestimate power supply needs; with a more accurate assessment of a device's power requirements, we can achieve higher efficiency and greater energy savings.
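To see how the "double savings" claim works, consider a hypothetical 400 W IT load behind power supplies of different efficiencies, counting roughly one extra watt of cooling for every watt wasted:

    it_load_watts = 400
    for efficiency in (0.70, 0.90):
        wall_power = it_load_watts / efficiency
        wasted = wall_power - it_load_watts
        total = wall_power + wasted      # each wasted watt needs ~1 W of extra cooling
        print(f"{efficiency:.0%} efficient PSU: {wall_power:.0f} W at the wall, "
              f"{wasted:.0f} W wasted, ~{total:.0f} W including extra cooling")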
Cloud Computing
Cloud Computing can be considered a form of Green Networking because of the efficiencies it brings. It offers the following advantages: consolidation (reducing redundancy and waste), abstraction (decoupling workloads from the physical infrastructure), automation (removing manual labor from runtime operations) and utility computing (enabling service providers to offer storage and virtual servers that ICT companies can access on demand).

Green networking practices include:
Implementing virtualization
Practicing server consolidation
Upgrading older equipment for newer, more energy-efficient products
Employing systems management to increase efficiency
Substituting telecommuting, remote administration and videoconferencing for travel
Using high-efficiency power supplies
Improving data center design

The vision of a Green Network is one where we are all connected to the Internet wirelessly, using very little energy, and where all our data is securely stored in highly efficient, reliable Data Centers running at low energy per gigabit per second. This can also include access to network services from Cloud computing service providers. Whatever the future holds, Green Networking will help reduce the carbon footprint of the IT industry and hopefully lead the way in a cultural shift that all of us need to make if we are to reverse the global warming caused by human emissions of greenhouse gases. Finally, efficiency versus consumption is an interesting argument: efficiency tends to drive consumption. IT solutions can address efficiency; it is society that must address consumption.

An arduous profession: Network administrator


Network administrators are teasingly referred to as the highest level of techie you can be before being turned into a pot-bellied boss and moved into management.
Network administration is a grueling profession in which the person is accountable for maintaining the computer hardware and software that comprise a computer network. This normally includes the deployment, configuration, maintenance and monitoring of active network equipment. A related role is that of the network specialist, or network analyst, who concentrates on network design and security. The responsibility is not limited to this, either: the role often involves many different aspects and may include tasks such as network design, management, troubleshooting, backup and storage, documentation, security and virus prevention, as well as managing users.
The actual role of the Network Administrator will vary from company to company, but will commonly include activities and tasks such as network address assignment, assignment of routing protocols and routing table configuration as well as configuration of authentication and authorization – directory services. It often includes maintenance of network facilities in individual machines, such as drivers and settings of personal computers as well as printers and such. It sometimes also includes maintenance of certain network servers: file servers, VPN gateways, intrusion detection systems, etc. but the common responsibilities include:
Oversee administration of networks
Designs, manages and maintains the LAN network server, IBM AS/400 application and data servers, SQL server, state interface server, E911 phone company interface server and remote access devices; develops and monitors system security procedures to protect the system from physical harm, viruses, unauthorized users and data damage; conducts the installation, configuration and maintenance of servers, network hardware and software; establishes and maintains network user profiles, user environment, directories and security.
Provide system support
Implements and maintains connectivity standards allowing PCs to communicate with network and server applications; maintains a technical inventory of current configuration of all servers, PCs, shares, printers and software installations; prepares and maintains accurate and detailed problem/resolution records. Tracks frequency and nature of problems; assists the System Analyst with second-level user support when necessary.
Identify and recommend computer system needs
Conduct product evaluations of upgraded or new hardware and software identifying strengths, weaknesses, and potential benefits; assist with on-going statistical analysis of system load, to determine optimal operating efficiencies and assist in capacity planning algorithms.
Perform additional duties as needed
Provides assistance to management and users regarding NIBRS and NCIC connectivity as applied in the application software; assesses user accounts, upgrades, removes and configures network printing devices, directory structures, rights, network security and software on file servers; performs network troubleshooting to isolate and diagnose problems while maintaining minimal system outages.
To become a successful network administrator, one should have a Bachelor's degree in Computer Science and thorough knowledge of server-level operations and software principles using Windows NT 4.0 through 2003 and Linux/Unix operating systems; good knowledge of and experience with LAN systems and hardware such as Cisco and HP, including experience with managed switches and VLAN capability (preferred); knowledge of local area networks and wide area networks, including experience with networking essentials such as DNS, DHCP, NAT, WINS, packet filtering and advanced routing; in-depth knowledge of application packages including antivirus, backup routines, network sharing, group e-mail suites and open source software; knowledge of current network and computer system security practices; the ability to install and maintain a variety of operating systems as well as related hardware and software; the ability to clearly and concisely communicate technical information to non-technical users at all organizational levels; the ability to accurately prepare and maintain various records, reports, correspondence and other departmental documents; and the ability to establish and maintain effective working relationships and exercise tact when dealing with governmental officials, outside agencies, co-workers and supervisors.

Sunday, May 30, 2010

Wire free networking- The Wi-Fi world




As the name indicates, wireless networking means no cables or wires are required to network your computers and share your Internet connection. Wi-Fi connects computers, printers, video cameras and game consoles into a fast Ethernet network via microwave radio.
A wireless LAN is the perfect way to improve data connectivity in an existing building without the expense of installing a structured cabling scheme to every desk. Besides the freedom that wireless computing affords users, ease of connection is a further benefit. Problems with the physical aspects of wired LAN connections (locating live data outlets, loose patch cords, broken connectors, etc.) generate a significant volume of helpdesk calls. With a wireless network, the incidence of these problems is reduced.
A range of wireless network technologies have reached, or will soon reach, the general business market, but wireless LANs based on the 802.11 standard are the most likely candidate to become widely prevalent in corporate environments. Current 802.11b products operate at 2.4GHz and deliver up to 11Mbps of bandwidth – comparable to a standard Ethernet wired LAN in performance. An upcoming version called 802.11a moves to a higher frequency range and promises significantly faster speeds; it is expected to have security concerns similar to 802.11b. The low cost of 802.11b, combined with strong performance and ease of deployment, means that many departments and individuals already use it, at home or at work – even if IT staff and security management administrators do not yet recognize wireless LANs as an approved technology.
Without doubt, wireless LANs have a high gee-whiz factor. They provide always-on network connectivity but don't require a network cable. Office workers can roam from meeting to meeting throughout a building, constantly connected to the same network resources enjoyed by wired, desk-bound coworkers. Home or remote workers can set up networks without worrying about how to run wires through houses that were never designed to support network infrastructure. Wireless LANs may actually prove less expensive to support than traditional networks for employees who need to connect to corporate resources in multiple office locations.
Large hotel chains, airlines, convention centers, Internet cafes and the like see wireless LANs as an additional revenue opportunity, providing Internet connectivity to their customers; wireless is a more affordable and logistically acceptable alternative to wired LANs for these organizations. For example, an airline can provide for-fee wireless network access for travelers in frequent flyer lounges – or anywhere else in the airport. Market maturity and technology advances will lower the cost and accelerate widespread adoption of wireless LANs. End-user spending, the primary cost metric, will drop from about $250 in 2001 to around $180 in 2004 (Gartner Group). By 2005, 50 percent of Fortune 1000 companies will have extensively deployed wireless LAN technology based on evolved 802.11 standards (0.7 probability). By 2010, the majority of Fortune 2000 companies will have deployed wireless LANs to support standard wired-technology LANs (0.6 probability).
For the foreseeable future, wireless technology will complement wired connectivity in enterprise environments. Even new buildings will continue to incorporate wired LANs, primarily because wired networking remains less expensive than wireless. In addition, wired networks offer greater bandwidth, allowing for future applications beyond the capabilities of today's wireless systems. Although it may cost ten times more to retrofit a building for wired networking (initial construction being by far the preferred time to set up network infrastructure), wiring is only a very small fraction of the overall capital outlay for an enterprise network. For that reason, many corporations are only just testing wireless technology. This limited acceptance at the corporate level means few access points with a limited number of users in real-world production environments, or evaluation test beds sequestered in a lab. In response, business units and individuals will deploy wireless access points on their own. These unauthorized networks almost certainly lack adequate attention to information security and present a serious concern for protecting online business assets.
Finally, the 802.11b standard shares unlicensed frequencies with other devices, including Bluetooth wireless personal area networks (PANs), cordless phones and baby monitors. These technologies can, and do, interfere with each other. 802.11b also fails to delineate roaming between access points.
802.11b’s low cost of entry is what makes it so attractive. However, inexpensive equipment also makes it easier for attackers to mount an attack. “Rogue” access points and unauthorized, poorly secured networks compound the odds of a security breach.
Although attacks against 802.11b and other wireless technologies will undoubtedly increase in number and sophistication over time, most current 802.11b risks fall into seven basic categories, including insertion attacks, interception and unauthorized monitoring of wireless traffic, and jamming.
With all its advantages, the major issue with wireless is security: anyone within geographical range of an open, unencrypted wireless network can 'sniff' or record the traffic, gain unauthorized access to internal network resources as well as to the Internet, and then possibly use the wireless network's IP address to send spam or carry out other illegal actions. Such abuses are rare for home routers but may be significant concerns for office networks. There are three principal ways to secure a wireless network.
For closed networks (such as home users and organizations), the most common way is to configure access restrictions in the access points. Those restrictions may include encryption and checks on MAC addresses. Another option is to disable ESSID broadcasting, making the access point more difficult for outsiders to detect. Wireless Intrusion Prevention Systems can be used to provide wireless LAN security in this network model.
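Conceptually, the MAC-address check is nothing more than an allow-list lookup, as in the toy sketch below; real access points implement this in firmware, and because MAC addresses can be spoofed it should never be the only line of defence.

    allowed_macs = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}   # example addresses only

    def admit(client_mac):
        # Accept the client only if its MAC address is on the allow-list.
        return client_mac.lower() in allowed_macs

    for mac in ("00:1A:2B:3C:4D:5E", "de:ad:be:ef:00:01"):
        print(mac, "->", "associate" if admit(mac) else "reject")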
For commercial providers, hotspots and large organizations, the preferred solution is often to have an open and unencrypted, but completely isolated, wireless network. Users initially have no access to the Internet or to any local network resources. Commercial providers usually forward all web traffic to a captive portal which handles payment and/or authorization. Another solution is to require users to connect securely to a privileged network using a VPN.
Wireless networks are less secure than wired ones; in many offices intruders can easily visit and hook up their own computer to the wired network without problems, gaining access to the network, and it's also often possible for remote intruders to gain access to the network through backdoors like Back Orifice. One general solution may be end-to-end encryption, with independent authentication on all resources that shouldn't be available to the public.


Wireless LAN security has a long way to go. The current implementation of WEP has proved to be flawed, and further initiatives to develop a standard that is robust and provides adequate security are urgently needed. 802.1x and EAP are just mid-points on a long journey; until a new security standard for WLANs arrives, third-party and proprietary methods need to be implemented.

DAS: Perfect for Local Data Sharing




DAS is a type of storage that is connected directly to the server which enables quick access to the data but only through the server.

A network storage system helps organize and save critical information created on a computer in an efficient and accessible manner. Direct Attached Storage is an extremely versatile dedicated solution that addresses many storage problems. Its most common uses are server expansion and low-cost clustering. Direct-attached storage, or DAS, is the most basic level of storage, in which storage devices are part of the host computer, as with drives, or directly connected to a single server, as with RAID arrays or tape libraries. Network workstations must therefore access the server in order to connect to the storage device. This is in contrast to networked storage such as NAS and SAN, which are connected to workstations and servers over a network. As the first widely popular storage model, DAS products still comprise a large majority of the installed base of storage systems in today's IT infrastructures.
Although networked storage is growing at a faster rate than ever, direct-attached storage is still a viable option by virtue of being simple to deploy and having a lower initial cost than networked storage. For clients on the network to access the storage device in the DAS model, they must be able to access the server it is connected to. If the server is down or experiencing problems, there is a direct impact on users' ability to store and access data. In addition to storing and retrieving files, the server also bears the load of processing applications such as e-mail and databases. Network bottlenecks and slowdowns in data availability may occur as server bandwidth is consumed by applications, especially if a lot of data is being shared from workstation to workstation.
DAS is ideal for small businesses, departments and workgroups that do not need to share information over long distances or across an enterprise, and for localized file sharing in environments with a single server or a few servers. Small companies traditionally utilize DAS for file serving and e-mail, while larger enterprises may leverage DAS in a mixed storage environment that also includes NAS and SAN. DAS offers ease of management and administration in this scenario, since it can be managed using the network operating system of the attached server. However, management complexity can escalate quickly with the addition of new servers, since storage for each server must be administered separately.
From an economic perspective, DAS is a cost-effective storage solution for small enterprises, though limited in its scalability. It is ideal for setups that rely on localized file sharing, where there is no need to transfer files over long distances. Enterprises that begin with DAS but later shift to networked solutions can continue to use DAS to store less critical data. A single-enclosure DAS offers some advantages, including a connection that can be managed with minimal skills, because the cabling is an integral part of the cabinet housing the server. DAS is a general-purpose solution for all types of storage processing.
Organizations that do eventually transition to networked storage can protect their investment in legacy DAS. One option is to place it on the network via bridge devices, which allows current storage resources to be used in a networked infrastructure without incurring the immediate costs of networked storage. Once the transition is made, DAS can still be used locally to store less critical data.
Alongside these plus points, DAS has some drawbacks. A single-enclosure DAS design suffers from poor scalability and limited disk capacity, which means DAS cannot be used as the only storage medium in an enterprise environment. Poor scalability also adds to the complexity of managing the storage environment. DAS does not allow for good management practices in which a single data repository image is maintained, it does not provide the uptime or security associated with a SAN or NAS configuration, and disk consolidation with DAS is not feasible.
A multiple-external-enclosure DAS design offers the advantage of speedier recovery in case of a complete server hardware failure, and storage capacity in the terabytes, far greater than the internal capacity of a single computer. On the flip side, a multiple-external-enclosure DAS adds to the complexity of management, is more expensive than an internal solution and has greater space requirements. When setting up DAS, the following aspects of the hard disks should be taken into consideration: disk capacity, disk I/O and hard disk connectivity.
Large-scale DAS deployments can be a little difficult to secure because of the distributed nature of the servers. DAS security includes server security policies and access limitations to the server – both physical and over a network. DAS hosted on Windows servers can be made secure by using group policies. DAS scores well on the manageability front so long as scalability is not an issue. Backup and recovery of DAS storage can be done over LAN; but this adds to the LAN traffic and can slow down applications. A solution is to add another network to be used solely for backup and recovery but such a solution adds to the management complexity and may not be adequate for very large databases.
With DAS, redundancy is provided at the disk or controller level, since fault tolerance for locally attached storage is handled by the DAS hardware itself. System-level redundancies cost more, and in the event of a server problem the attached storage may be unavailable to users. To improve data accessibility, the Windows Cluster service can be deployed to provide redundant hosts that share the storage subsystem. RAID configurations also add to the redundancy.
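As a rough guide to what that redundancy costs in capacity, the sketch below compares usable space for common RAID levels on a hypothetical DAS shelf of eight 146GB drives (real arrays reserve additional space for spares and metadata):

    drive_count, drive_gb = 8, 146

    usable_gb = {
        "RAID 0 (striping, no redundancy)": drive_count * drive_gb,
        "RAID 1/10 (mirroring)": drive_count * drive_gb // 2,
        "RAID 5 (single parity)": (drive_count - 1) * drive_gb,
        "RAID 6 (double parity)": (drive_count - 2) * drive_gb,
    }
    for level, capacity in usable_gb.items():
        print(f"{level}: {capacity} GB usable of {drive_count * drive_gb} GB raw")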
In terms of performance, DAS delivers well because the processor and disk sit close to each other. However, any effort to scale DAS can cause performance levels to fall, because storage and applications share the same set of resources. Unlike NAS and SAN, which use dedicated resources for storage processing, DAS pushes storage-related traffic such as file sharing and backup across the LAN.
Like all industries, storage networking is in a constant state of change. It's easy to fall into the trap of choosing the emerging or disruptive storage technology at the time. But the best chance for success comes with choosing a solution that is cost-correct and provides long term investment protection for your organization. Digital assets will only continue to grow in the future. Make sure your storage infrastructure is conducive to cost-effective expansion and scalability. It is also important to implement technologies that are based on open industry standards, which will minimize interoperability concerns as you expand your network.

A DAS is a dedicated storage device that is added to your environment. It is an ideal solution for applications requiring a lower-cost, entry-level cluster to maintain availability. And if you are simply looking for an economical way to expand storage, then DAS is a smart alternative.

Scalable on demand: highly scalable; whether you need to add 146GB or 10TB, disks can be added as required
Flexible: multiple configuration options for a variety of storage needs, including transactional databases, media downloads and archiving
Dedicated: a dedicated solution ensures you are the only one accessing data on the drives, and can help satisfy requirements for certain compliance programs
Easy: adding a DAS is easy on the budget and eliminates the complexity of growing storage by adding another server to your configuration

The Magic World of 4G



A strong need exists to combine both the wireless (LAN) concept and cell or base station wide area network design. 4G is seen as the solution that will bridge that gap and thereby provide a much more robust network.
Technology is versatile and changing speedily with time. Following the evolutionary line of cell phone technology standards that has spanned from 1G, 2G and 2.5G to 3G, 4G describes the brave new world beyond advanced 3G networks.
4G, also known as “beyond 3G” or “fourth-generation” cell phone technology, refers to an entirely new evolution and a complete 3G replacement in wireless communications. It is a successor to 2G and 3G that aims to provide very high data transfer rates. The technology can provide very fast wireless Internet access not only to stationary users but also to mobile users, and it is expected to overcome the deficiencies of 3G in terms of speed and quality. 4G is best described as a term that stands for Mobile multimedia, Anytime anywhere, Global mobility support, Integrated wireless and personalized services.
But at this time nobody knows the exact definition of 4G technology; the term is often used to denote fast Internet access available to mobile phone users. Moreover, the distinguishing features of high-rate multimedia streaming and end-to-end IP configuration are judged to be its MAGIC enchantment. 3G has WiMAX and WiFi as separate wireless technologies, whereas 4G is expected to combine the two, and the efficiency of 4G can easily be estimated from the way it would coalesce these two extremely reliable technologies. 4G can also greatly advance pervasive computing, whose aim is to attach itself to every living space possible so that human beings remain connected to wireless technology, intentionally and unintentionally. 4G will therefore be able to connect various high-speed networks together, enabling each of us to carry digital devices even in dispersed locations. Network operators worldwide would be able to deploy wireless mesh networks and make use of cognitive radio technology for widespread coverage and access. Someday 4G networks may replace all existing 2.5G and 3G networks, perhaps even before a full deployment of 3G: multiple 3G standards are springing up that would make it difficult for 3G devices to be truly global. A strong need exists to combine both the wireless LAN concept and the cell or base station wide area network design, and 4G is seen as the solution that will bridge that gap and thereby provide a much more robust network.
Along with these advantages, there are some major challenges in realising the 4G vision. The first major concern is power consumption. This becomes critical as multiple processing and communication elements are added to drive higher levels of MIPS (throughput) in mobile devices; all of these elements increase current drain. Additional hardware acceleration technology will be required to manage power in this kind of environment, with OFDM-based technology emerging as crucial to managing some of the processing streams and power challenges in these kinds of applications and devices.
The second challenge is spectral efficiency, which is largely a matter of availability. For more spectrum to be made available, the options are either to re-farm existing spectrum from 2G and analogue broadcast TV or to open up higher-frequency bands. Further improvements in spectral efficiency can be derived from the use of cognitive radio, but dramatic innovations will be required to deliver on that promise.
The third significant challenge is cost, whether related to infrastructure, operations or handsets; it also includes the cost of deploying services. There are a variety of challenges in this area that come along with the network topology required for a 4G system.
First of all, to deliver the spectral efficiency and coverage required, we will have to see a dramatic growth in the number of basestations. To support the kinds of services that consumers increasingly expect, we will need as many as three times more basestations to deliver a ten-fold increase in data rate. One way to reduce basestation density is to apply advanced antenna techniques such as MIMO and space-time coding (STC). These techniques can improve spectral efficiency, reducing the number and growth rate of basestations while still achieving the kind of coverage required to deliver the bandwidth necessary for the applications consumers want.
There are capital costs associated with growth in the number of basestations required to deliver coverage at high data rates. On the handset side, there are significant challenges in continuing to drive down the cost of integrating greater and greater processing capability in multimode RF technology. From a carrier perspective, the affordability of managing, billing and distributing content over these networks to drive revenue to recover those higher operating costs is another challenge in realising a 4G vision.
Everybody is still wondering what the defining 3G application is, yet people are already getting into 4G technologies; mobile media players, Internet access, broadcast technology and other types of aggregation will become more robust and will drive average revenue per user (ARPU) in the carrier space.
Added to this are miniaturisation challenges, including power reduction, cost, size and product development cycle time. Multimode technology in 4G means we have to be able to hand off between the different types of radio access technologies in a seamless way. There are significant software, billing, carrier interoperability and enterprise-carrier interoperability challenges. On the multimedia side, it is obvious that rich digital media content brings dramatic processing challenges for mobile devices.
It is obvious that 4G is not going to be driven by a single entity or organisation. It will require a tremendous number of partnerships and a robust ecosystem, so exploitation of the capabilities available in wireless technologies is certain. Given the sweeping changes in the world of technology, it is going to require multiple standards bodies, corporations and government entities to come together to drive standards-based interoperability and the opportunity to deliver 4G networks. Governments will have to manage the spectrum in different parts of the world, and this will have a dramatic impact on how we can exploit the capabilities available to us in wireless technologies.
Traditional equipment vendors have historically operated at layers 1–3. Wireline internet access is increasingly being challenged to improve security. Security has multiple elements, much more than just moving encrypted traffic at faster and faster rates across the network. Security is also about denial of service attacks and digital rights management. These are all becoming carrier problems.

4G is a multi-purpose, versatile technology; hence it can utilize almost all of the packet-switched technologies. It can use both orthogonal frequency division multiplexing (OFDM) and orthogonal frequency division multiple access (OFDMA). The OFDM mechanism splits a digital signal across many narrowband sub-carriers at different frequencies. 4G is also capable of using multiple-input/multiple-output (MIMO) technology; this antenna technique is used to optimize data speed and reduce errors in the network.
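A minimal sketch of the OFDM idea, assuming NumPy is available: data symbols are assigned to narrowband sub-carriers and combined into a single time-domain signal with an inverse FFT, and the receiver recovers them with a forward FFT. Real systems add cyclic prefixes, pilots and channel equalization, all omitted here.

    import numpy as np

    num_subcarriers = 64
    bits = np.random.randint(0, 2, num_subcarriers)
    symbols = 2 * bits - 1                 # simple BPSK mapping: 0 -> -1, 1 -> +1

    ofdm_symbol = np.fft.ifft(symbols)     # transmitter: one time-domain signal, many sub-carriers
    received = np.fft.fft(ofdm_symbol)     # receiver (ideal channel): FFT recovers the symbols
    recovered_bits = (received.real > 0).astype(int)

    print("all bits recovered:", bool(np.array_equal(bits, recovered_bits)))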
The flexibility of 4G technologies to be used in combination with GSM and CDMA gives it an edge over other technologies, because the high broadband capability of 4G increases data streaming not only for stationary users but also for mobile users. 4G can be efficiently combined with cellular technologies to make consistent use of smart phones; the digital cameras in smart phones can be used to run video blogs from scattered geographical regions. This gives manufacturers the opportunity to produce more affordable, user-friendly 4G-compatible devices; the famous iPod is one such device that supports video blogging. Hence 4G is capable of providing a new horizon of opportunity for both existing and startup telephone companies.

4G delivers true mobile broadband for the masses with a superior user experience. Nortel is boosting the adoption of mobile multimedia and the delivery of a true mobile broadband experience through its leadership in 4G-enabled technologies - LTE (Long Term Evolution) and IMS (IP Multimedia Subsystem). 4G mobile broadband provides improved performance and lower total cost of ownership and enables a new era of personalized services. 4G networks are IP-based and flatter, with fewer nodes to manage. The benefits are significant and can make 4G mobile broadband a truly disruptive technology, giving service providers a cost-effective way to deploy next-generation technology and services while redefining the end-user experience.