Sunday, June 20, 2010

Contributing towards the new knowledge-based economy: KPO



The evolution and maturity of the Indian BPO sector has given rise to yet another wave in the global outsourcing scenario: KPO, or Knowledge Process Outsourcing.

In today’s knowledge era, following in the footsteps of BPO, KPO has emerged as the next big outsourcing sector in the market and is contributing heavily to the economy. KPO stands for Knowledge Process Outsourcing and covers processes that demand advanced information search, analytical, interpretation and technical skills, as well as some judgment and decision-making. KPO can be considered a one-step extension of Business Process Outsourcing (BPO), because the BPO industry is shaping itself into knowledge process outsourcing owing to its advantages and future scope. But it is not simply a 'B' replaced by a 'K'. A knowledge process can be defined as a high-value-added process chain in which the achievement of objectives is highly dependent on the skills, domain knowledge and experience of the people carrying out the activity. When such an activity is outsourced, a new business activity emerges, generally known as Knowledge Process Outsourcing. One can call it a high-end activity that is likely to boom in the coming years; there is huge potential in this field of knowledge.
The whole concept of KPO is information driven: it is a continuous process of creating and disseminating information by bringing together information industry leaders to create knowledge and to see meaning in information and its context. KPO typically involves a component of Business Process Outsourcing (BPO), Research Process Outsourcing (RPO) and Analysis Process Outsourcing (APO). KPO business entities provide domain-based processes, advanced analytical skills and business expertise, rather than just process expertise. The KPO industry handles a larger share of highly skilled work than the BPO industry. While KPO derives its strength from depth of knowledge, experience and judgment, BPO in contrast is more about size, volume and efficiency.
Fields of work that the KPO industry focuses on include intellectual property or patent research, content development, R&D in pharmaceuticals and biotechnology, market research, equity research, data research, database creation, analytical services, financial modeling, design and development in automotive and aerospace industries, animation and simulation, medical content and services, remote education, publishing and legal support. MBAs, PhDs, engineers, doctors, lawyers and other specialists are expected to be much in demand.
Most low-level BPO jobs provide support for an organization's core competencies and entry-level prerequisites are simply a command of English (or applicable language) and basic computer skills. Knowledge process outsourcing jobs, in comparison, are typically integrated with an organization's core competencies. The jobs involve more complex tasks and may require an advanced degree and/or certification. Examples of KPO include accounting, market and legal research, Web design and content creation.
The success achieved by many overseas companies in outsourcing business process operations to India has encouraged many of them to start outsourcing their high-end knowledge work as well. Cost savings, operational efficiencies, availability of and access to a highly skilled and talented workforce, and improved quality are all underlying expectations in offshoring high-end processes to India.
KPO delivers high value to organizations by providing domain-based processes and business expertise rather than just process expertise. These processes demand the advanced analytical and specialized skills of knowledge workers who have domain experience to their credit. Outsourcing of knowledge processes therefore faces more challenges than BPO (Business Process Outsourcing). Some of the challenges involved in KPO are maintaining high quality standards, investment in KPO infrastructure, a limited talent pool, the requirement for a higher level of control, confidentiality and enhanced risk management. Measured against these challenges, it is not surprising that India, with its established IT and ITES service providers, has been ranked the most preferred KPO destination owing to the country's large talent pool, quality IT training, friendly government policies and low labor costs.
Even the Indian government has recognized that knowledge processes will influence economic development extensively in the future and has taken remarkable measures towards liberalization and deregulation. Recent reforms have reduced licensing requirements, made foreign technology accessible, removed restrictions on investment and made the process of investment much easier. The government has been continuously improving infrastructure with better roads, setting up technology parks, opening up telecom for enhanced connectivity and providing uninterrupted power to augment growth.
The last five years have seen vast development of Knowledge Parks, with infrastructure of global standards, in cities like Chennai, Bangalore and Gurgaon. Multi-tenanted 'intelligent' buildings, built-to-suit facilities and sprawling campuses are tailor-made to suit customer requirements.
Knowledge Process Outsourcing has proven to be a boon for increasing productivity and cost savings in the area of market research. Organizations are adopting outsourcing to meet their market research needs, and the trend is set to take the global market research industry by storm. India's intellectual potential is the key factor behind India being the favored destination for the KPO industry.
A major reason why companies in India will have no option but to move up the value chain from BPO to KPO is quite simple. By 2010, India may have become too costly to provide low-end services at competitive costs. For example, Evalueserve says Indian salaries have increased at an average of 14 per cent a year. The number of professionals working in the offshore industry is expected to increase as more and more companies decide to become involved in BPO and KPO. This will further drive the trend towards the migration of low-end services to high-end services, especially as offshore service vendors (as well as professionals working in this sector) gain experience and capabilities to provide high-value services.
The future of KPO is very bright. Surveys have shown that the global KPO industry is expected to reach nearly 17 billion dollars by the end of 2010, of which approximately 12 billion dollars worth of business will be outsourced to India. What’s more, the Indian KPO industry is expected to employ an additional 250,000 KPO professionals by the end of 2010, compared with the current estimated figure of 25,000 employees. Predictions have been made that India will acquire nearly 70 percent of the KPO outsourcing sector. The sector's potential is high because it is not restricted to the Information Technology (IT) or Information Technology Enabled Services (ITES) sectors; it also includes Intellectual Property related services, Business Research and Analytics, Legal Research, Clinical Research, Publishing, Market Research and more.

Computing with nanotechnology: Nanocomputing



Nanocomputer is the logical name for a computer smaller than the microcomputer, which in turn is smaller than the minicomputer. More technically, it is a computer whose fundamental parts are no bigger than a few nanometers.

The world has been moving from mini to micro, and the latest step is nanotechnology. A nanometer is a unit of measure equal to a billionth of a meter; roughly ten atoms fit side by side in a nanometer. Nanotechnology today is an emerging set of tools, techniques, and unique applications involving the structure and composition of materials on a nanoscale. It is the art of manipulating materials on an atomic or molecular scale to build microscopic devices such as robots, which in turn will assemble individual atoms and molecules into products much as if they were Lego blocks. Nanotechnology is about building things one atom at a time, about making extraordinary devices with ordinary matter. A nanocomputer is a computer whose physical dimensions are microscopic, and the field of nanocomputing is part of the emerging field of nanotechnology. Nanocomputing describes computing that uses extremely small, or nanoscale, devices (one nanometer [nm] is one billionth of a meter).
A nanocomputer is similar in many respects to the modern personal computer, but on a very much smaller scale. Access to several thousand (or millions) of nanocomputers, depending on a user's needs, gives a whole new meaning to the expression "unlimited computing": users may be able to gain a lot more power for less money. Several types of nanocomputers have been suggested or proposed by researchers and futurists. Electronic nanocomputers would operate in a manner similar to the way present-day microcomputers work; the main difference is one of physical scale. More and more transistors are squeezed into silicon chips with each passing year; witness the evolution of integrated circuits (ICs) capable of ever-increasing storage capacity and processing power. The ultimate limit to the number of transistors per unit volume is imposed by the atomic structure of matter, and most engineers agree that technology has not yet come close to pushing this limit. In the electronic sense, the term nanocomputer is relative: by 1970s standards, today's ordinary microprocessors might be called nanodevices.
Chemical and biochemical nanocomputers would store and process information in terms of chemical structures and interactions. Biochemical nanocomputers already exist in nature; they are manifest in all living things, but these systems are largely uncontrollable by humans. The development of a true chemical nanocomputer will likely proceed along lines similar to genetic engineering: engineers must figure out how to get individual atoms and molecules to perform controllable calculations and data storage tasks.
Mechanical nanocomputers would use tiny moving components called nanogears to encode information. Such a machine is suggestive of Charles Babbage’s analytical engines of the 19th century. For this reason, mechanical nanocomputer technology has sparked controversy; some researchers consider it unworkable. All the problems inherent in Babbage's apparatus, according to the naysayers, are magnified a millionfold in a mechanical nanocomputer. Nevertheless, some futurists are optimistic about the technology, and have even proposed the evolution of nanorobots that could operate, or be controlled by, mechanical nanocomputers.
A quantum nanocomputer would work by storing data in the form of atomic quantum states or spin. Technology of this kind is already under development in the form of single-electron memory (SEM) and quantum dots. The energy state of an electron within an atom, represented by the electron energy level or shell, can theoretically represent one, two, four, eight, or even 16 bits of data. The main problem with this technology is instability. Instantaneous electron energy states are difficult to predict and even more difficult to control. An electron can easily fall to a lower energy state, emitting a photon; conversely, a photon striking an atom can cause one of its electrons to jump to a higher energy state.
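As a rough way to see what those figures imply, the short sketch below (plain Python, no physics involved) counts how many reliably distinguishable states a single storage element would need in order to hold each number of bits; the state counts are simple arithmetic, not measured properties of any real device.

import math

# Back-of-the-envelope only: b bits require 2**b reliably distinguishable states.
for bits in (1, 2, 4, 8, 16):
    print(f"{bits:2d} bits -> {2 ** bits:6d} distinguishable states needed")

# Conversely, an element that can distinguish k states stores floor(log2(k)) bits.
print("a 1000-state element stores", math.floor(math.log2(1000)), "bits")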
There are several ways nanocomputers might be built, using mechanical, electronic, biochemical, or quantum technology. It is unlikely that nanocomputers will be made out of semiconductor transistors (Microelectronic components that are at the core of all modern electronic devices), as they seem to perform significantly less well when shrunk to sizes under 100 nanometers.
Computing systems implemented with nanotechnology will need to employ defect- and fault-tolerant measures to improve their reliability, both because many factors can lead to imperfect device fabrication and because nanometer-scale devices are more susceptible to environmentally induced faults. Researchers have approached this reliability problem from many angles, and surveys of the field discuss many promising examples, ranging from classical fault-tolerant techniques to approaches specific to nanocomputing. The research results summarized in such surveys suggest that many useful, yet strikingly different, solutions may exist for tolerating defects and faults within nanocomputing systems. These surveys also cover a number of software tools useful for quantifying the reliability of nanocomputing systems in the presence of defects and faults.
The potential for a microscopic computer appears to be endless. Along with use in the treatment of many physical and emotional ailments, the nanocomputer is sometimes envisioned to allow for the ultimate in a portable device that can be used to access the Internet, prepare documents, research various topics, and handle mundane tasks such as email. In short, all the functions that are currently achieved with desktop computers, laptops, and hand held devices would be possible with a nanocomputer that is inserted into the body and directly interacts with the brain.
Despite the hype about nanotechnology in general and nanocomputing in particular, a number of significant barriers must be overcome before any progress can be claimed.
Work is needed in all areas associated with computer hardware and software design:
• Nanoarchitectures and infrastructure
• Communications protocols between multiple nanocomputers, networks, grids, and the Internet
• Data storage, retrieval, and access methods
• Operating systems and control mechanisms
• Application software and packages
• Security, privacy, and accuracy of data
• Circuit faults and failure management
Basically, the obstacles can be divided into two distinct areas:
Hardware: the physical composition of a nanocomputer, its architecture, its communications structure, and all the associated peripherals
Software: new software, operating systems, and utilities must be written and developed, enabling very small computers to execute in the normal environment.

Nanocomputers have the potential to revolutionize the 21st century. Increased investments in nanotechnology could lead to breakthroughs such as molecular computers. Billions of very small and very fast (but cheap) computers networked together can fundamentally change the face of modern IT computing in corporations that today are using mighty mainframes and servers. This miniaturization will also spawn a whole series of consumer-based computing products: computer clothes, smart furniture, and access to the Internet that's a thousand times faster than today's fastest technology.
Nanocomputing's best bet for success today comes from being integrated into existing products, PCs, storage, and networks—and that's exactly what's taking place.
The following list presents just a few of the potential applications of nanotechnology:
• Expansion of mass-storage electronics to huge multi-terabit memory capacity, increasing memory storage per unit a thousandfold. Recently, IBM's research scientists announced a technique for transforming iron and a dash of platinum into the magnetic equivalent of gold: a nanoparticle that can hold a magnetic charge for as long as 10 years. This breakthrough could radically transform the computer disk-drive industry.
• Making materials and products from the bottom up; that is, by building them from individual atoms and molecules. Bottom-up manufacturing should require fewer materials and pollute less.
• Developing materials that are 10 times stronger than steel, but a fraction of the weight, for making all kinds of land, sea, air, and space vehicles lighter and more fuel-efficient. Such nanomaterials are already being produced and integrated into products today.
• Improving the computing speed and efficiency of transistors and memory chips by factors of millions, making today's chips seem as slow as dinosaurs. Nanocomputers will eventually be very cheap and widespread; supercomputers will be about the size of a sugar cube.

Near to online Storage: Nearline Storage

One of the oldest forms of storage, it is any medium used to copy data from the hard drive and store it in a form that is easily retrieved.
The word "nearline" is a contraction of near-online, a term used in computer science to describe an intermediate type of data storage that represents a compromise between online storage (supporting frequent, very rapid access to data) and offline storage/archiving (used for backups or long-term storage, with infrequent access to data).
Nearline storage has many of the same features, performance characteristics, and device requirements as online storage. However, nearline storage is deployed as backup support for online storage. Demand for nearline storage is growing rapidly because more information must be archived for regulatory reasons. Nearline storage is commonly used for data backup because it can back up large volumes of data quickly, which sometimes cannot be achieved with the slower bandwidth of tape-based solutions. Nearline storage is built using less expensive disk drives, such as SATA drives, to store information that must be accessed more quickly than is possible through tape or tape libraries.
Both archiving (offline) and nearline allow a reduction of database size that results in improved speed of performance for the online system. However, accessing archived data is more complex and/or slower than is the case with nearline storage, and can also negatively affect the performance of the main database, particularly when the archive data must be reloaded into that database.
There are three major categories of near-line storage: magnetic disk, magnetic tape, and compact disc (CD). Magnetic disks include 3.5-inch diskettes, and various removable media such as the Iomega Zip disk and the Syquest disk. Tapes are available in almost limitless variety. Examples of media in the CD category are CD recordable (CD-R), CD rewriteable (CD-RW), and digital versatile disc rewriteable (DVD-RW).
Near-line storage provides inexpensive, reliable, and virtually unlimited data backup and archiving with somewhat less accessibility than integrated online storage. For individuals and small companies, it can be an ideal solution if the user is willing to tolerate some time delay when storing or retrieving data. Near-line storage media are immune to infection by online viruses, Trojan horses, and worms because the media are physically disconnected from networks, computers, servers, and the Internet. When a near-line storage medium is being used to recover data, it can be write-protected to prevent infection. Even so, it is suggested that near-line storage media always be scanned with an anti-virus program before use.
The capacity and efficiency of near-line storage options has improved greatly over the years. Magnetic tape is one of the oldest formats still in use. The tape is available in formats that work with a wide range of large systems and are frequently used to create backup files for corporations on a daily basis. The tapes are easily stored and can be used to reload the most recently saved information in the event of a system failure. Magnetic tapes also function as an excellent electronic history, making it possible to research when a given bit of data was entered into the system.
The second type of near-line storage is the magnetic disk. This category includes diskettes developed for use with personal computers, now considered obsolete in many quarters, as well as disks developed for specific purposes such as storing a large quantity of zipped files. Since the early 21st century, most desktop and laptop computers have shipped without a magnetic diskette drive, although mainframes sometimes still make use of some type of magnetic disk.
As the most recent innovation in removable storage options, the CD provides a lot of storage in a small space. The CD encompasses different formats for different file-saving activity. The CD-R, or CD recordable, makes it possible to copy a wide range of text and similar documents. The CD-RW, or CD rewritable, makes it possible to easily load data onto the disc and also to load the data onto another system with ease. The DVD-RW, or digital versatile disc rewritable, allows the copying of all types of media, including video.
One of the advantages of near-line storage is that these devices offer a means of protecting data from harm. This includes keeping the data free from viruses or bugs that may infect the hard drive at some point. While the hard drive may become corrupted and damage files loaded on the drive, data housed on near-line storage devices remains unaffected and can be used to reload the hard drive once the system is cleansed of any type of malware. Another benefit of near-line storage is the fact that this storage option is extremely inexpensive. Individuals and small companies find that utilizing these simple data storage devices provides a great deal of security and peace of mind without requiring any type of ongoing expense. Once the device is purchased and the storage of data is complete, the information can be filed in a cabinet or a drawer and restored when and as needed.
In the event that a near-line storage device is used frequently to load and unload data, it is a good idea to scan the disk or tape with some type of antivirus software before commencing the activity. There is always the slight possibility that the medium became infected when used last. Scanning and removing any viruses or other potentially damaging files will ensure the virus does not have the chance to proliferate to other systems in the network.
Near-line storage is the on-site storage of data on removable media. The removable storage idea dates back to the mainframe computer and, in the age of the smaller computer, remains popular among individuals, small businesses, and the large enterprise.

Keep the data secure with Offline storage

As the name suggests, offline storage is storage in which the data is physically removed from the network and cannot be accessed directly by the computer. It is commonly referred to as "archive" or "backup" storage and is typically a tape drive or low-end disk drive (virtual tape). Offline or disconnected storage is designed for storing data for long periods of time; because the data is archived, offline storage appliances focus on data accuracy, protection, and security. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again; unlike tertiary storage, it cannot be accessed without human interaction. Optical discs and flash memory devices are most popular, and to a much lesser extent removable hard disk drives. In enterprise use, magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, in case a disaster, for example a fire, destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, so data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is seldom or never accessed, off-line storage is less expensive than tertiary storage.
Backing up to tape is very useful for data that is created online and then ages or becomes less important to the company. Many companies migrate such data to less expensive storage media like tape. When the tapes contain replicas of online data, or the data simply needs to be kept for archive purposes, they are often shipped offsite and taken offline as part of the company's data migration plan. Offline storage once meant exclusively tapes on shelves or carted off to vaults. Companies like tape because it is removable, relatively cheap, and has a long shelf life for archival. But many companies would like to see those same attributes apply to disk, and now they can.
Offline storage is very practical as a data transfer medium and can also serve as a good backup, since a remotely located copy will not be affected by any disaster that might hit the direct source of the data. Offline storage also provides good security, since the data cannot easily be accessed from a computer system.

Wednesday, June 16, 2010

Use of Solid state physics in storage

SSDs have been used in enterprise storage to speed up applications and improve performance without the cost of adding additional servers; by using solid-state memory to store data, they replace the traditional hard disk drive.
The name solid state drive has nothing to do with a liquid or solid state of matter; the term "solid-state" (from solid-state physics) refers to the use of semiconductor devices rather than electron tubes, and in the present context it distinguishes solid-state electronics from electromechanical devices. Solid state drives also enjoy greater stability than their disk counterparts because they have no moving parts. As a result, solid-state drives are less fragile than hard disks and are also silent (unless a cooling fan is used); since there are no mechanical delays, they usually enjoy low access time and latency. Solid-state drive (SSD) technology was first marketed to the military and niche industrial markets in the mid-1990s.
Almost all electronics we have today are built from semiconductors and chips. In the case of an SSD, the name refers to the fact that the primary storage medium is semiconductor-based rather than magnetic media such as a hard drive.
An SSD looks no different from a traditional hard drive. This design enables an SSD to be placed in a notebook or desktop computer in place of a hard drive. To do this, it needs to have the same standard dimensions as a 1.8-, 2.5- or 3.5-inch hard drive. It also uses either the ATA or SATA drive interface so that there is a compatible connection.
Actually, this type of storage already exists in the form of flash memory drives that plug into the USB port. In fact, solid state drives and USB flash drives both use the same type of non-volatile memory chips that retain their information even when they have no power. The difference comes in the form factor and capacity of the drives. While a flash drive is designed to be external to the computer system, an SSD is designed to reside inside the computer in place of a more traditional hard drive.
An SSD is commonly composed of DRAM volatile memory or, more often, NAND flash non-volatile memory. Most SSD manufacturers use non-volatile flash memory to create more rugged and compact devices for the consumer market. These flash memory-based SSDs, also known as flash drives, do not require batteries. They are often packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch). Another advantage is that non-volatility allows flash SSDs to retain memory even during sudden power outages, ensuring data persistence. Compared to DRAM SSDs, flash memory SSDs are slower, and some designs are even slower than traditional HDDs on large files, but flash SSDs have no moving parts, so seek times and the other delays inherent in conventional electro-mechanical disks are negligible.
SSDs based on volatile memory such as DRAM have ultrafast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of Flash SSDs or traditional HDDs. DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation. (Similar to the hibernate function used in modern operating systems.) These types of SSD are usually fitted with the same type of DRAM modules used in regular PCs and servers, allowing them to be swapped out and replaced with larger modules.
Flash-based solid-state drives are very functional and can be used to create network appliances from general-purpose PC hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware. NAND flash-based SSDs offer a potential power saving when used as a cache in front of a hard drive; however, typical usage patterns result in cache misses in the NAND flash as well, leading to continued spinning of the drive platter or much longer latency if the drive needs to spin up. Such devices are slightly more energy efficient but may not perform any better.
On the other hand, flash-memory drives have limited lifetimes and will often wear out after 1,000,000 to 2,000,000 write cycles (1,000 to 10,000 per cell) for MLC, and up to 5,000,000 write cycles (100,000 per cell) for SLC. Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, a technique called wear leveling. Another concern is the security implications: for example, encryption of existing unencrypted data on flash-based SSDs cannot be performed securely, because wear leveling causes newly encrypted sectors to be written to a physical location different from their original location, so the data remains unencrypted in the original physical location. Apart from these disadvantages, the capacity of SSDs is currently lower than that of hard drives. However, flash SSD capacity is predicted to increase rapidly, with drives of 1 TB already released for enterprise and industrial applications. The asymmetric read vs. write performance can cause problems with certain functions where read and write operations are expected to complete in a similar timeframe; SSDs currently have much slower write performance compared to their read performance.
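To make the idea of wear leveling concrete, here is a minimal, hypothetical sketch in Python of a flash translation layer that always redirects a logical write to the least-worn physical block. The class name and block counts are invented for illustration; real SSD firmware additionally handles garbage collection, bad blocks and static wear leveling.

# Hypothetical sketch of wear leveling: logical writes are redirected to
# whichever physical block has the fewest erase cycles so far.
class WearLevelingFTL:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # erases per physical block
        self.mapping = {}                      # logical block -> physical block

    def write(self, logical_block, data):
        # Choose the least-worn physical block that is not currently mapped.
        free = [b for b in range(len(self.erase_counts))
                if b not in self.mapping.values()]
        target = min(free, key=lambda b: self.erase_counts[b])
        self.erase_counts[target] += 1         # rewriting implies an erase here
        self.mapping[logical_block] = target
        return target

ftl = WearLevelingFTL(num_blocks=8)
for i in range(20):
    ftl.write(i % 4, b"payload")               # rewrite 4 logical blocks repeatedly
print(ftl.erase_counts)                        # wear is spread across all 8 blocks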
As a result of wear leveling and write combining, the performance of SSDs degrades with use. DRAM-based SSDs (but not flash-based SSDs) require more power than hard disks when operating, and they still use power when the computer is turned off, while hard disks do not.
Set against these disadvantages, solid state drives have several advantages over magnetic hard drives, most of which come from the fact that the drive has no moving parts. While a traditional drive has motors to spin up the magnetic platters and move the drive heads, all the storage on a solid state drive is handled by flash memory chips. This provides distinct advantages: lower power usage, faster data access, and higher reliability. Solid state drives consume very little power in portable computers: because there is no power draw for motors, the drive uses far less energy than a regular hard drive. An SSD also has faster data access; since the drive doesn't have to spin up a platter or move drive heads, data can be read almost instantly. Reliability is also a key factor for portable drives. Hard drive platters are very fragile and sensitive, and even small jarring movements from an impact can render the drive completely unreadable. Since an SSD stores all its data in memory chips, there are fewer moving parts to be damaged in any sort of impact.
As with most computer technologies, the primary limiting factor of using the solid state drives in notebook and desktop computers is cost. These drives have actually been available for some time now, but the cost of the drives is roughly the same as the entire notebook they could be installed into.
The other problem affecting the adoption of the solid state drives is capacity. Current hard drive technology can allow for over 200GB of data in a small 2.5-inch notebook hard drive. Most SSD drives announced at the 2007 CES show are of the 64GB capacity. This means that not only are the drives much more expensive than a traditional hard drive, they only hold a fraction of the data.
All of this is set to change soon, though. Several companies that specialize in flash memory have announced upcoming products that look to push the capacities of solid state drives closer to those of normal hard drives, at even lower prices than current SSDs. This will have a huge impact on notebook data storage. SSD is a rapidly developing technology, and the performance of flash SSDs is difficult to benchmark; in the coming years it is surely going to extend its reach.

King Bluetooth: The name behind the Bluetooth technology



It’s a short-range communications technology that replaces the cables connecting portable or fixed devices while maintaining high levels of security. The key features of Bluetooth technology are robustness, low power, and low cost. The Bluetooth Specification defines a uniform structure for a wide range of devices to connect and communicate with each other.

Technological development has reached a point where devices that are still connected via cables or wires look outdated, and this wireless shift has given us a technology called "Bluetooth". The term is not new to most of us; we often use this technology in our day-to-day life to transfer information and files. It is a short-range wireless radio technology that allows electronic devices to connect to one another. The term Bluetooth has a small story behind it that justifies its unusual name: the developers of this wireless technology first used "Bluetooth" as a code name, but as time passed, the name stuck. The word "Bluetooth" is actually taken from the 10th century Danish King Harald Bluetooth. King Bluetooth had been influential in uniting Scandinavian Europe during an era when the region was torn apart by wars and feuding clans; Bluetooth technology was first developed in Scandinavia and has been able to unite differing industries such as the cell phone, computing, and automotive markets. Bluetooth wireless technology simplifies and combines multiple forms of wireless communication into a single, secure, low-power, low-cost, globally available radio frequency.
Bluetooth technology makes connections just like the cables that connect a computer to a keyboard, mouse, or printer, or the wire that connects an MP3 player to headphones, but it does so without the cables and wires. With Bluetooth there is no more worrying about which cable goes where, or getting tangled in the mess.
Bluetooth is a packet-based protocol with a master-slave structure. Connections between Bluetooth enabled electronic devices allow these devices to communicate wirelessly through short-range, ad hoc networks known as piconets. Each device in a piconet can also simultaneously communicate with up to seven other devices within that single piconet and each device can also belong to several piconets simultaneously. This means the ways in which you can connect your Bluetooth devices is almost limitless.
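As a simple illustration of the piconet topology described above, the sketch below models a piconet as a plain Python data structure with one master and at most seven active slaves. The class and device names are hypothetical; this is an illustration of the topology, not a real Bluetooth stack.

# Illustrative sketch only: a piconet with one master and up to seven active slaves.
MAX_ACTIVE_SLAVES = 7   # limit set by the Bluetooth specification

class Piconet:
    def __init__(self, master):
        self.master = master
        self.slaves = []

    def join(self, device):
        if len(self.slaves) >= MAX_ACTIVE_SLAVES:
            raise RuntimeError("piconet full: park or disconnect a slave first")
        self.slaves.append(device)

phone_net = Piconet(master="phone")
for d in ("headset", "car-kit", "laptop"):
    phone_net.join(d)
print(phone_net.slaves)   # ['headset', 'car-kit', 'laptop']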
Bluetooth actually is one of the secure ways to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands of 1 MHz width in the range 2402-2480 MHz. This is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
Bluetooth technology’s adaptive frequency hopping (AFH) capability was designed to reduce interference between wireless technologies sharing the 2.4 GHz spectrum. AFH works within the spectrum to take advantage of the available frequency. This is done by the technology detecting other devices in the spectrum and avoiding the frequencies they are using. This adaptive hopping among 79 frequencies at 1 MHz intervals gives a high degree of interference immunity and also allows for more efficient transmission within the spectrum.
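The sketch below illustrates the idea of adaptive frequency hopping in Python: the radio hops pseudo-randomly among the 79 one-megahertz channels, skipping any channel flagged as busy in a channel map. The hop selection here is just a random choice, not the actual hop-selection kernel defined in the Bluetooth specification, and the blocked band is an invented example roughly the width of a Wi-Fi channel.

import random

# Illustrative sketch of adaptive frequency hopping (AFH), not the real
# hop-selection kernel: 79 channels of 1 MHz each, centred at 2402..2480 MHz.
CHANNELS_MHZ = list(range(2402, 2481))          # 79 channels

def next_hop(channel_map):
    """Pick the next frequency, avoiding channels flagged as bad
    (e.g. occupied by other 2.4 GHz devices) in the channel map."""
    good = [f for f, ok in zip(CHANNELS_MHZ, channel_map) if ok]
    return random.choice(good)

# Example: mark 2426-2448 MHz as bad (roughly one Wi-Fi channel's width).
channel_map = [not (2426 <= f <= 2448) for f in CHANNELS_MHZ]
hops = [next_hop(channel_map) for _ in range(5)]
print(hops)   # five pseudo-random hops, all outside the blocked band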
Bluetooth range varies depending on the class of radio used in an implementation: Class 3 radios reach up to 1 meter (3 feet); Class 2 radios, most commonly found in mobile devices, reach about 10 meters (33 feet); and Class 1 radios, used primarily in industrial use cases, reach about 100 meters (300 feet).
Wireless Nature
There are tons of benefits of using wireless devices. In addition to improving safety as a result of eliminating the clutter of wires and associated hazardous connections, wireless technology also offers many convenient advantages. For example, when you are traveling with your laptop, PDA, MP3 player and other devices, you no longer have to worry about bringing along all of your connecting cables.
Inexpensive
Bluetooth technology is inexpensive for companies to implement, which results in lower overall manufacturing costs, and the ultimate benefit is passed on to consumers. The end result: Bluetooth devices are relatively inexpensive.
Automatic
Usage is very simple, and one does not have to struggle to get devices connected. When two or more Bluetooth devices come within range (up to 30 feet) of one another, they automatically begin to communicate without you having to do anything. Once the communication begins, Bluetooth devices set up Personal Area Networks, or piconets.
Standardized Protocol
Since Bluetooth is a standardized wireless specification, a high level of compatibility among devices is guaranteed. The Bluetooth specification uses and defines various profiles. Every Bluetooth profile is specific to a particular function. For instance, when a Bluetooth enabled cell phone and a Bluetooth headset (Both with the same profile) are communicating with one another, both will understand each other without the user having to do anything, even if the devices are of different models/makes.
Low Interference
Bluetooth devices avoid interference with other wireless devices by using a technique known as spread-spectrum frequency hopping and by using low-power wireless signals.
Low Energy Consumption
Bluetooth uses low power signals. As a result, the technology requires little energy and will therefore use less battery or electrical power.
Share Voice and Data
The Bluetooth standard allows compatible devices to share both voice and data communications. For example, a Bluetooth enabled cell phone is capable of sharing voice communications with a compatible Bluetooth headset; nevertheless, the same cell phone may also be capable of establishing a GPRS connection to the Internet. Then, using Bluetooth, the phone can connect to a laptop. The result: The laptop is capable of surfing the web or sending and receiving email.
Control
Unless a device is already paired to your device, you have the option to accept or reject the connection and file transfer. This prevents unnecessary or infected files from unknown users from transferring to your device.
Instant Personal Area Network (PAN)
Up to seven compatible Bluetooth devices can connect to one another within proximity of up to 30 feet, forming a PAN or piconet. Multiple piconets can be automatically setup for a single room.
Upgradeable
The Bluetooth standard is upgradeable. A development group, the Bluetooth Special Interest Group (SIG), is responsible for newer versions of the standard, which offer several new advantages while remaining backward compatible with older versions.
Bluetooth has several positive aspects and it is difficult to find much of a downside, but there are still some areas that need attention. Like other communication technologies, Bluetooth faces issues of privacy and identity theft. These issues are easily combatable, however, and various measures are already in place to provide for the secure use of Bluetooth technology.
Bluetooth has a shortcoming when it comes to file-sharing speed. Compared to the transfer rate of up to 4.0 MBps for infrared, Bluetooth can only reach about 1.0 MBps, meaning that it transfers files slowly. For transferring or sharing larger files at close range, another wireless technology like infrared is better.
Handy, portable, sophisticated, easy to use and wirelessly connected, Bluetooth can be considered a technology of the future that is going to remain in the market for a long time.

Friday, June 4, 2010

Green touch to the technology


Green computing is also usually referred to as Green IT. The idea is to have the least possible human impact on the environment. Beyond this, it aims to achieve environmental sustainability through the environmentally responsible use of computers and their resources.
The 21st century is the era of computers, gadgets and technologies, and these are fueling the energy issues. As topics like global warming, climate change and carbon emissions get hotter, it is time to "go green" not only in our regular lives but also in technology.

Green computing, or green IT, refers to environmentally sustainable computing: the efficient use of resources in computing. The term generally relates to using computing resources in a way that minimizes environmental impact, maximizes economic viability and meets social duties. Most of us think computers are non-polluting and consume very little energy, but this is a wrong notion. It is estimated that of the $250 billion per year spent on powering computers worldwide, only about 15% of that power is spent computing; the rest is wasted idling. Thus, energy saved on computer hardware and computing will equate to tonnes of carbon emissions saved per year. Given the pervasiveness of information technology, the industry has to lead a revolution of sorts by turning green in a manner no industry has ever done before.
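Taking the article's own figures at face value, the implied waste works out as follows (a two-line calculation, not an independent measurement):

annual_power_spend = 250e9      # USD spent powering computers worldwide (article's estimate)
useful_fraction    = 0.15       # share of that power actually spent computing

wasted = annual_power_spend * (1 - useful_fraction)
print(f"Wasted on idling: ${wasted / 1e9:.0f} billion per year")   # about $212 billion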

Green IT is "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—such as monitors, printers, storage devices, and networking and communications systems—efficiently and effectively with minimal or no impact on the environment. It includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling."

Opportunities lie in green technology like never before in history, and organizations see it as a way to create new profit centres while helping the environmental cause. The plan towards green IT should include new electronic products and services with optimum efficiency and all possible options for energy savings. For example, recycling computing equipment can keep harmful materials such as lead, mercury, and hexavalent chromium out of landfills, and can also replace equipment that would otherwise need to be manufactured, saving further energy and emissions. More efficient power supplies are helping by running at 80% efficiency or better. Power management software also helps by putting computers to sleep or into hibernation when not in use. On the far horizon, reversible computing (which also includes quantum computing) promises to reduce power consumption by a factor of several thousand, but such systems are still very much in the laboratory. The best way to recycle a computer, however, is to keep it and upgrade it. Further, it is important to design computers that can be powered by low-power, non-conventional energy sources such as solar energy, pedaling a bike, or turning a hand-crank.

Modern IT systems rely upon a complicated mix of networks and hardware; as such, a green computing initiative must cover all of these areas as well. There are considerable economic motivations for companies to take control of their own power consumption; of the power management tools available, one of the most powerful may still be simple, plain, common sense.
Product longevity
The PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC. "Look for product longevity, including upgradeability and modularity." For instance, manufacturing a new PC makes a far bigger ecological footprint than manufacturing a new RAM module to upgrade an existing machine, a common upgrade that saves the user having to purchase a new computer.
Resource allocation
Algorithms can also be used to route data to data centers where electricity is less expensive. This approach does not actually reduce the amount of energy being used; it only reduces the cost to the company using it. However, a similar strategy could be used to direct traffic to rely on energy that is produced in a more environmentally friendly or efficient way. A similar approach has also been used to cut energy usage by routing traffic away from data centers experiencing warm weather; this allows computers to be shut down to avoid using air conditioning.
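A minimal sketch of this kind of resource allocation, assuming a hypothetical table of electricity prices: the scheduler simply places each job in the data center with the cheapest power at that moment, and a greener variant would rank sites by the carbon intensity of the local grid instead of, or in addition to, price.

# Illustrative sketch: route a job to the data center with the cheapest
# electricity right now. Site names and prices are hypothetical.
prices_usd_per_kwh = {
    "us-east":  0.11,
    "us-west":  0.09,
    "eu-north": 0.07,
}

def place_job(job, prices):
    site = min(prices, key=prices.get)
    return f"routing {job} to {site} ({prices[site]:.2f} USD/kWh)"

print(place_job("nightly-batch", prices_usd_per_kwh))
# A greener variant would use grid carbon intensity as the ranking key.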
Virtualization
Computer virtualization refers to the abstraction of computer resources, such as the process of running two or more logical computer systems on one set of physical hardware.
Terminal servers
Terminal servers have also been used in green computing. When using such a system, users at a terminal connect to a central server; all of the actual computing is done on the server, but the end user experiences the operating system on the terminal. These can be combined with thin clients, which use as little as one eighth the energy of a normal workstation, resulting in a decrease in energy costs and consumption.
Power management
The Advanced Configuration and Power Interface (ACPI), an open industry standard, allows an operating system to directly control the power-saving aspects of its underlying hardware. This allows a system to automatically turn off components such as monitors and hard drives after set periods of inactivity. In addition, a system may hibernate, where most components (including the CPU and the system RAM) are turned off.
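The policy ACPI makes possible can be pictured as a small state machine: the longer the system has been idle, the deeper the power-saving step. The sketch below is a toy Python model with example thresholds, not real ACPI code or any vendor's actual defaults.

# Toy model of an ACPI-style power policy; thresholds are examples only.
POLICY = [
    (10 * 60, "turn off monitor"),
    (20 * 60, "spin down hard drive"),
    (30 * 60, "suspend to RAM"),
    (60 * 60, "hibernate (suspend to disk)"),
]

def power_state(idle_seconds):
    action = "fully on"
    for threshold, step in POLICY:
        if idle_seconds >= threshold:
            action = step
    return action

for idle in (5 * 60, 25 * 60, 90 * 60):
    print(f"idle {idle // 60:3d} min -> {power_state(idle)}")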
With this green vision, the industry has been focusing on power efficiency throughout the design and manufacturing process of its products using a range of clean-computing strategies, and it is striving to educate markets on the benefits of green computing for the sake of the environment, as well as productivity and overall user experience. A few initiatives are changing the overall picture:
Carbon-free computing
The idea is to reduce the "carbon footprint" of users: the amount of greenhouse gases produced, measured in units of carbon dioxide (CO2). Some PC products are certified carbon-free, with the vendor taking responsibility for the amounts of CO2 they emit. The use of silicon-on-insulator (SOI) technology in manufacturing, and of strained silicon capping films on transistors (known as "dual stress liner" technology), has contributed to reduced power consumption in such products.
Solar Computing
Solar cells complement power-efficient silicon, platform, and system technologies and make it possible to develop fully solar-powered devices that are non-polluting, silent, and highly reliable. Solar cells require very little maintenance throughout their lifetime, and once initial installation costs are covered, they provide energy at virtually no cost.
Energy-efficient computing
Another green-computing initiative is the development of energy-efficient platforms for low-power, small-form-factor (SFF) computing devices. In 2005, VIA introduced the VIA C7-M and VIA C7 processors, which have a maximum power consumption of 20W at 2.0GHz and an average power consumption of 1W. These energy-efficient processors produce over four times less carbon during their operation and can be efficiently embedded in solar-powered devices. Intel, the world's largest semiconductor maker, revealed eco-friendly products at a recent conference in London. The company uses virtualization software, a technique that enables it to combine several physical systems into virtual machines that run on a single, powerful base system, thus significantly reducing power consumption.
Power supply
Desktop computer power supplies (PSUs) are generally 70–75% efficient, dissipating the remaining energy as heat. An industry initiative called 80 PLUS certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
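A quick worked example shows what those efficiency figures mean at the wall outlet, assuming a 300 W internal load (the load figure is an assumption chosen for illustration):

# PSU efficiency at the wall, for an assumed 300 W internal load.
load_watts = 300.0

for efficiency in (0.72, 0.80, 0.90):
    wall = load_watts / efficiency          # power drawn from the outlet
    heat = wall - load_watts                # dissipated as heat in the PSU
    print(f"{efficiency:.0%} efficient: {wall:5.1f} W from the wall, "
          f"{heat:5.1f} W lost as heat")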
Storage
Smaller form factor (e.g. 2.5 inch) hard disk drives often consume less power per gigabyte than physically larger drives.


Green use — reducing the energy consumption of computers and other information systems as well as using them in an environmentally sound manner
Green disposal — refurbishing and reusing old computers and properly recycling unwanted computers and other electronic equipment
Green design — designing energy-efficient and environmentally sound components, computers, servers, cooling equipment, and data centers
Green manufacturing — manufacturing electronic components, computers, and other associated subsystems with minimal impact on the environment

Store it in a new way: green way


Green storage refers to a broad spectrum of solutions, ranging from sheer hardware efficiency to application-level software.
IT has already made inroads into controlling the energy costs associated with servers, and to achieve such goals it is chanting the green mantra everywhere. The latest focus is green storage: technologies such as virtualization, thin provisioning, deduplication and newer disk technologies that can significantly reduce power and cooling costs in a data center. Most storage technologies being marketed as green today are really technologies designed to improve storage utilization, which enables organizations to store their data on fewer disk drives and, in turn, reduce total cost of ownership (TCO) as well as power and cooling requirements.
Since data centers are the largest energy consumers in most organizations, they have understandably become a focal point for trying to reduce energy costs. Yet, while storage is part of the data center, it has escaped much of this intense focus so far. This is primarily because the growing number of high-density servers in the data center over the past several years has naturally captured the most attention as organizations seek to resolve the most glaring power and cooling issues. Furthermore, responsibility for data center power and cooling costs has traditionally resided within the corporate facilities budget, meaning that many data center managers are neither directly responsible for, nor at times even aware of, the costs that they generate.
Storage solutions currently marketed as green fall mainly into two categories: technologies that increase storage utilization, enabling users to store more data with fewer disk drives, and technologies and solutions that directly reduce power and/or cooling costs or are inherently green. Technologies in the first category make the easiest business case: these are technologies organizations should be using anyway, if available, because they directly and immediately save resources. Technologies in the second category are specifically targeted at the power and cooling issues of storage rather than providing power and cooling benefits as a byproduct of something else. Technologies that increase storage utilization rates are considered green because they help users store more data with fewer disk drives, an efficiency that automatically reduces power and cooling requirements.
Storage virtualization
This is the ability to present a file, volume or storage device in such a way that its physical complexity is hidden, and the application and the storage administrator see a pool of available resources instead of separate silos of dedicated storage.
Thin provisioning
Thin provisioning (TP) is a method of optimizing the efficiency with which the available space is utilized in storage area networks (SAN). TP operates by allocating disk storage space in a flexible manner among multiple users, based on the minimum space required by each user at any given time.
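A minimal sketch of the thin-provisioning idea, with an invented 1 MiB allocation unit: the volume advertises a large logical size, but physical extents are consumed only when data is actually written.

# Minimal sketch of thin provisioning (allocation unit is an example value).
EXTENT = 1024 * 1024   # 1 MiB allocation unit

class ThinVolume:
    def __init__(self, logical_size):
        self.logical_size = logical_size
        self.allocated = set()            # extents that actually consume disk

    def write(self, offset, data):
        first = offset // EXTENT
        last = (offset + len(data) - 1) // EXTENT
        for e in range(first, last + 1):
            self.allocated.add(e)         # physical space allocated on first write

    def physical_usage(self):
        return len(self.allocated) * EXTENT

vol = ThinVolume(logical_size=100 * 1024**3)   # advertises 100 GiB
vol.write(0, b"x" * 4096)                      # but a 4 KiB write...
print(vol.physical_usage())                    # ...consumes just one 1 MiB extent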
Thin-provisioning-aware replication
Thin-provisioning-aware replication (sometimes called thin replication) enhances remote replication capabilities so that only the allocated space to which users have written data is transmitted to the secondary site.
Data reduction techniques
Whether they are called file-level single instance store (SIS), data deduplication, data compression or redundant file elimination, the intent of data reduction techniques is to reduce the amount of capacity needed to store a given amount of information. It is especially useful in backup or archiving scenarios. Backup, for example, tends to be a particularly wasteful activity. Often, the data change rate is less than 10% of new and modified files per week. This means that the weekly full backups are sending and storing at least 90% unnecessary data, in addition to the redundancies in data throughout the week. Gartner considers data reduction a transformational technology and rates it as one of the fastest deployed storage technologies that the market has seen in more than a decade.
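The sketch below illustrates the core of block-level deduplication: each fixed-size chunk is fingerprinted, and chunks already in the store are kept only as references. Real products use variable-size chunking, collision handling and persistent metadata; the block size and in-memory store here are invented for illustration.

import hashlib

# Illustrative block-level deduplication: identical chunks are stored once,
# and later occurrences become references to the existing copy.
BLOCK = 4096
store = {}            # fingerprint -> chunk (the single stored copy)

def dedup_write(data):
    refs, new_bytes = [], 0
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = chunk
            new_bytes += len(chunk)
        refs.append(fp)
    return refs, new_bytes

backup = b"A" * BLOCK * 9 + b"B" * BLOCK       # 10 blocks, mostly identical
refs, written = dedup_write(backup)
print(len(backup), "bytes logically,", written, "bytes physically stored")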
Boot from Storage Area Network (SAN)
This reduces the need for internal disk drives in servers (especially in rack server and blade server environments), by allowing the boot image to reside in the SAN. It also improves server reliability (no disks) and server availability (again, because there are no disks), and it helps in rapid server imaging. When coupled with thin provisioning and thin-provisioning-aware replication, multiple boot images can be stored with great space efficiency.
Quality of Service
These features optimize the use of disk storage system resources by implementing user-defined policies and/or adaptive algorithms to maximize application performance while minimizing back-end storage costs. Quality of Service (QoS) storage features improve performance and throughput using a variety of techniques, including cache partitioning and binding, and input/output (I/O) prioritization. They also have the potential to improve service-level agreements (SLAs) because more data can economically be stored online, and they provide cost savings when coupled with virtualization.
Inherently green storage technologies
While efforts to develop technologies specifically designed to reduce power consumption or cooling requirements are in their infancy from a storage hardware perspective, there are some storage solutions on the market that are inherently green. These technologies should be evaluated for their potential benefit in an organization's specific environment and considered when making purchase decisions where those benefits are judged to be high.
Massive Arrays of Idle Disks
Massive Arrays of Idle Disks (MAIDs) store data on a large group of disk drives that can be spun down when not in use. It can also be used to spin down disks (and save power and cooling costs) during non business hours in companies that do not run a 24/7 operation, or as a third tier of storage. New and future MAID implementations may incorporate intelligent power management (IPM) techniques that allow different degrees of spin-down to increase the user's options for power savings and response times. The three IPM levels include heads unloaded, heads unloaded and drive slowed to 4,000 rpm, and sleep mode/power on, where the drives stop spinning altogether. In addition to power and cooling savings, MAID and IPM approaches can also prolong the lives of disk drives. Combined with data reduction techniques, such as data deduplication, MAID storage provides a compelling green storage solution.
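A toy model of a MAID-style power policy, loosely following the IPM levels mentioned above: the longer a drive has been idle, the deeper the power-saving state it is placed in. The thresholds and drive names are examples, not values from any product.

# Toy MAID-style power policy; thresholds and names are examples only.
LEVELS = [
    (10 * 60,  "heads unloaded"),
    (30 * 60,  "heads unloaded, platters slowed"),
    (120 * 60, "sleep: platters stopped"),
]

def drive_state(idle_seconds):
    state = "active (full speed)"
    for threshold, level in LEVELS:
        if idle_seconds >= threshold:
            state = level
    return state

for name, idle in {"db-lun": 30, "backup-lun": 45 * 60, "archive-lun": 6 * 3600}.items():
    print(f"{name:11s}: {drive_state(idle)}")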
Small form factor disk drives
These are 2.5-inch hard drives that, in addition to increasing storage spindle density per square foot, reduce the number of voltage conversions within the system because they require less voltage than today's 3.5-inch-high rpm disk drives.
Airflow-enhanced cabinetry
A focus in the server environment for several years, this has begun to show up more in storage disk arrays. These designs do nothing to improve capacity, utilization or performance. Rather, they are designed to improve cooling, with the goal of positively affecting power and cooling issues (such as data center configuration) and costs.
Although the concept is promising, this technology is highly complex and may require delicate tuning to achieve true energy savings. For example, difficulty in accurately predicting idle periods can result in disks spinning up soon after they were spun down, yielding less energy conservation than anticipated. Furthermore, there is a risk that drive mechanics become less reliable with repeated spinning up and spinning down. In the absence of conclusive evidence of the long-term feasibility of turning off disk drives, customers are hesitant to adopt such technologies in haste. Keep in mind, though, that environmental issues are gaining serious commercial momentum and, fueled by the growing number of local and global green initiatives, they are rising ever more insistently up the corporate agenda.

The new era of networking: Green Networking


There is no formal definition of "green" in networking. In simple terms, it is the practice of selecting energy-efficient networking technologies and products, and minimizing resource use wherever possible. Green networking -- the practice of consolidating devices, relying more on telecommuting and videoconferencing, and using virtualization to reduce power consumption across the network -- is an offshoot of the trend towards "greening" just about everything from cars to coffee cups. That trend has encompassed IT in general, the data center and the network.
Green Networking can help reduce carbon emissions by the Information Technology (IT) industry. It covers all aspects of the network: personal computers, peripherals, switches, routers and communication media. The energy efficiency of all network components must be optimized to have a significant impact on the overall energy consumption of these components. The efficiencies gained by a Green Network will in turn reduce CO2 emissions and thus help mitigate global warming. The Life Cycle Assessment (LCA) of the components must also be considered; LCA is the evaluation of the environmental impact of a product from cradle to grave. New ICT technologies must be explored and their benefits assessed in terms of energy efficiency and the associated reduction in the environmental impact of ICT. Desktop computers and monitors consume 39% of all electrical power used in ICT; in 2002, this equated to 220 Mt (million tonnes) of CO2 emissions.

Several techniques are available to reduce the power consumption, and hence the equivalent CO2 emissions, of end-user equipment and network devices. To reduce the carbon footprint of desktop PCs, their usage must be efficiently managed. Old cathode ray tube (CRT) monitors should be replaced with liquid crystal display (LCD) screens, which reduce monitor energy consumption by as much as 80%. Replacing all desktop PCs with laptops would achieve a 90% decrease in power consumption. Energy can also be saved by running power-saving software on desktops; such software forces PCs into standby when they are not in use. Another option is to use solid-state drives, which use about 50% less power than mechanical hard drives.
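As a back-of-the-envelope illustration of the savings quoted above, the snippet below applies the 80% and 90% reduction figures to assumed baseline wattages and working hours; every number other than the quoted percentages is an assumption for illustration only.

# Rough estimate using the reduction figures quoted above (80% for CRT-to-LCD,
# 90% for desktop-to-laptop). Baseline wattages and hours are assumptions.
HOURS_PER_YEAR = 8 * 230          # assumed: 8-hour day, ~230 working days

def annual_kwh(watts, hours=HOURS_PER_YEAR):
    return watts * hours / 1000.0

crt_monitor_watts = 80            # assumed CRT monitor draw
desktop_pc_watts = 150            # assumed desktop PC draw

lcd_saving = annual_kwh(crt_monitor_watts) * 0.80     # CRT replaced by LCD
laptop_saving = annual_kwh(desktop_pc_watts) * 0.90   # desktop replaced by laptop

print(f"LCD swap saves roughly {lcd_saving:.0f} kWh per monitor per year")
print(f"Laptop swap saves roughly {laptop_saving:.0f} kWh per PC per year")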
When considering the Local Area Network (LAN) infrastructure, probably the most power-hungry device is the network switch. Power over Ethernet (PoE) is a relatively new technology introduced into modern network switches: PoE switch ports transmit data and also provide power to network devices such as IP phones, wireless LAN access points and other network-attached equipment. A PoE switch port can supply power to a connected device and scale that power back when it is not required.
Another solution is to use the power management software built into the network switch. With it, we can instruct the switch to turn off ports when they are not in use; for a full-power PoE port left off for 16 hours a day, this equates to a saving of 15.4 W × 16 hours × 365 days ≈ 90 kWh (89,936 watt-hours) per port per year.
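The per-port figure works out as follows (a quick check of the arithmetic, assuming a full-power 15.4 W PoE port switched off for 16 hours a day):

# Checking the per-port figure quoted above: a 15.4 W PoE port switched off
# for 16 hours a day, 365 days a year.
port_watts = 15.4
hours_off_per_day = 16
watt_hours = port_watts * hours_off_per_day * 365     # 89,936 Wh
print(f"{watt_hours:,.0f} Wh is about {watt_hours / 1000:.0f} kWh per port per year")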
As networks became more critical to daily business operations, additional network services were required; network infrastructure devices had to support VPNs (Virtual Private Networks) and data encryption as well. Integrating these services into the network infrastructure itself, rather than deploying a separate appliance for each, reduces the number of powered devices, making the network more energy efficient and reducing the carbon footprint of the network infrastructure.
Due to the high power consumption of Data Centers, several solutions have been proposed to save energy and make them more energy efficient. These include: taking the Data Center to the power source instead of taking the power source to the Data Center, consolidation, virtualization, improved server and storage performance, power management, high-efficiency power supplies and improved data center design.

Traditionally, the electrical power needed for Data Centers is supplied by the electricity grid. Using alternative energy sources at the Data Center is often impractical, so the solution is to take the Data Center to the energy source, which could be solar, wind, geothermal or some combination of these. Instead of the power traveling great distances, the data would travel great distances, and for this to be feasible we would require a broadband network infrastructure.
Consolidation
Going through a systematic program of consolidating and optimizing machines and workloads can achieve increased efficiencies in the Data Center.
Virtualization
Virtualization is one of the main technologies used to implement a Green Network. It is a technique for running multiple virtual machines on a single physical machine, sharing the resources of that one computer across multiple environments. Virtualization allows the pooling of resources, such as computing and storage, that are normally underutilized, and it offers the following advantages: less power, less cooling, less facility space and less network infrastructure. Virtualization can also replace the desktop: with desktop virtualization, one can use a thin client that consumes little power (typically around 4 watts), while the desktop image and all other programs required by the client are delivered from one of the virtualization servers.
Improved Server and Storage Performances
New multicore processors execute at more than four times the speed of previous generations, and new high-speed, high-performance disk arrays built on 144-gigabyte Fibre Channel drives can reduce transfer times and improve efficiencies within the Data Center.
Power Management
It is estimated that servers use up to 30% of their peak electricity consumption even when they are idle. Although power management tools are available, they are not always implemented. Many new CPU chips can scale back voltage and clock frequency on a per-core basis, and the power supplied to memory can be reduced in a similar way. By implementing such power management techniques, companies can save both energy and cost.
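On Linux servers, one widely available knob for this kind of scaling is the kernel's cpufreq governor, exposed through sysfs. The sketch below is illustrative only; it assumes a Linux host with cpufreq support, and changing the governor requires root privileges.

# Illustrative only: inspecting and switching every core's cpufreq governor
# on a Linux host. Assumes cpufreq support; writing the governor needs root.
import glob

GOVERNOR_PATHS = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"

def show_governors():
    """Print the current frequency-scaling governor of each core."""
    for path in sorted(glob.glob(GOVERNOR_PATHS)):
        with open(path) as f:
            print(path, "->", f.read().strip())

def set_governor(governor="powersave"):
    """Switch every core to the given governor (requires root)."""
    for path in glob.glob(GOVERNOR_PATHS):
        with open(path, "w") as f:
            f.write(governor)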
High Efficiency Power Supplies
The use of high-efficiency power supplies should be considered for all Data Center devices. Poor-quality power supplies not only have low efficiency, but that efficiency is also a function of utilization: at low utilization, the power supply becomes even less efficient. For every watt of electrical power wasted in a Data Center device, roughly another watt is spent on extra cooling, so investing in highly efficient power supplies can double the power savings. Another issue is that Data Center designers quite often overestimate power supply needs; with a more accurate assessment of the power requirements of a device, we can achieve higher efficiency and greater energy savings.
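A rough, assumed example of the doubling effect described above (the load and efficiency figures are made up for illustration):

# Assumed figures only: a low-efficiency power supply wastes power twice,
# once in the PSU itself and again in the cooling needed to remove that heat.
it_load_watts = 500                      # assumed useful load of the device
for efficiency in (0.70, 0.90):          # low- vs high-efficiency supply
    wall_draw = it_load_watts / efficiency
    psu_waste = wall_draw - it_load_watts
    total_penalty = psu_waste * 2        # wasted watt plus a matching cooling watt
    print(f"{efficiency:.0%} PSU: draws {wall_draw:.0f} W from the wall, "
          f"wastes {psu_waste:.0f} W, overall penalty about {total_penalty:.0f} W")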
Cloud Computing
Cloud Computing can be considered Green Networking through the efficiencies it brings. It offers the following advantages: consolidation (reducing redundancy and waste), abstraction (decoupling workloads from physical infrastructure), automation (removing manual labor from runtime operations) and utility computing (enabling service providers to offer storage and virtual servers that ICT companies can access on demand).

Green networking practices include:
Implementing virtualization
Practicing server consolidation
Replacing older equipment with newer, more energy-efficient products
Employing systems management to increase efficiency
Substituting telecommuting, remote administration and videoconferencing for travel
Using high-efficiency power supplies
Improving data center design

The vision of a Green Network is one where we are all connected wirelessly to the Internet using low-energy-consumption devices, and where all our data is securely stored in highly efficient, reliable Data Centers running at low energy per gigabit per second. This can also include access to network services from Cloud computing service providers. Whatever the future holds, Green Networking will help reduce the carbon footprint of the IT industry and hopefully lead the way in a cultural shift that all of us need to make if we are to reverse the global warming caused by human emissions of greenhouse gases. Finally, the relationship between efficiency and consumption is an interesting argument: efficiency tends to drive consumption. IT solutions can deliver efficiency; it is society that must address consumption.

An arduous profession: Network administrator


Network administrators are teasingly referred to as the highest level of techie you can be before you grow a pot belly and get turned into management.
Network administration is a grueling profession in which the administrator is accountable for the maintenance of the computer hardware and software that comprise a computer network. This normally includes the deployment, configuration, maintenance and monitoring of active network equipment. A related role is that of the network specialist, or network analyst, who concentrates on network design and security. The responsibility does not end there, however: the role often involves many different aspects and may include tasks such as network design, management, troubleshooting, backup and storage, documentation, security and virus prevention, as well as managing users.
The actual role of the Network Administrator varies from company to company, but commonly includes activities and tasks such as network address assignment, assignment of routing protocols and routing table configuration, as well as configuration of authentication and authorization via directory services. It often includes maintenance of network facilities on individual machines, such as drivers and settings of personal computers and printers, and sometimes maintenance of certain network servers: file servers, VPN gateways, intrusion detection systems, etc. The common responsibilities include:
Oversee administration of networks
Designs, manages and maintains LAN network server, IBM AS/400 application and data servers, SQL server, state interface server, E911 phone company interface server and remote access devices; Develops and monitors system security procedures to protect the system from physical harm, viruses, unauthorized users, and data damage; Conducts the installation, configuration and maintenance of servers, network hardware and software; Establishes and maintains network user profiles, user environment, directories, and security.
Provide system support
Implements and maintains connectivity standards allowing PCs to communicate with network and server applications; maintains a technical inventory of current configuration of all servers, PCs, shares, printers and software installations; prepares and maintains accurate and detailed problem/resolution records. Tracks frequency and nature of problems; assists the System Analyst with second-level user support when necessary.
Identify and recommend computer system needs
Conducts product evaluations of upgraded or new hardware and software, identifying strengths, weaknesses and potential benefits; assists with ongoing statistical analysis of system load to determine optimal operating efficiencies and assists with capacity planning.
Perform additional duties as needed
Provides assistance to management and users regarding NIBRS and NCIC connectivity as applied in the application software; assesses user accounts, upgrades, removes and configures network printing devices, directory structures, rights, network security and software on file servers; performs network troubleshooting to isolate and diagnose problems while maintaining minimal system outages.
To become a successful network administrator, one should have:
A Bachelor's degree in Computer Science and thorough knowledge of server-level operations and software principles on Windows NT 4.0 through 2003 and Linux/Unix operating systems
Good knowledge of and experience with LAN systems and hardware such as Cisco and HP, preferably including experience with managed switches and VLAN capability
Knowledge of local area networks and wide area networks, including experience with networking essentials such as DNS, DHCP, NAT, WINS, packet filtering and advanced routing
In-depth knowledge of application packages, including antivirus, backup routines, network sharing, group e-mail suites and open source software
Knowledge of current network and computer system security practices
The ability to install and maintain a variety of operating systems as well as related hardware and software
The ability to clearly and concisely communicate technical information to non-technical users at all organizational levels
The ability to accurately prepare and maintain various records, reports, correspondence and other departmental documents
The ability to establish and maintain effective working relationships and exercise tact when dealing with governmental officials, outside agencies, co-workers and supervisors