Sunday, June 20, 2010

Contributing towards the new knowledge-based economy: KPO



The evolution and maturity of the Indian BPO sector has given rise to yet another wave in the global outsourcing scenario: KPO, or Knowledge Process Outsourcing.

In today’s knowledge era, following in the footsteps of BPO, KPO has emerged as the next big outsourcing sector in the market and is contributing heavily to the economy. KPO stands for Knowledge Process Outsourcing. KPO includes processes that demand advanced information search, analytical, interpretation and technical skills, as well as some judgment and decision-making. KPO can be considered a one-step extension of Business Process Outsourcing (BPO), because the BPO industry is shaping itself into Knowledge Process Outsourcing on account of its advantages and future scope. But it is not simply a 'B' replaced by a 'K'. In fact, a knowledge process can be defined as a chain of high value-added processes where the achievement of objectives is highly dependent on the skills, domain knowledge and experience of the people carrying out the activity. When this activity gets outsourced, a new business activity emerges, generally known as Knowledge Process Outsourcing. One can call it a high-end activity, which is likely to boom in the coming years. There is huge potential in this field of knowledge.
The whole concept of KPO is information driven. It is a continuous process of creation and dissemination of information, bringing together information industry leaders to create knowledge and find meaning in information and its context. KPO typically involves a component of Business Process Outsourcing (BPO), Research Process Outsourcing (RPO) and Analysis Process Outsourcing (APO). KPO business entities provide domain-based processes, advanced analytical skills and business expertise, rather than just process expertise. The KPO industry handles a larger share of highly skilled work than the BPO industry. While KPO derives its strength from depth of knowledge, experience and judgment, BPO in contrast is more about size, volume and efficiency.
Fields of work that the KPO industry focuses on include intellectual property or patent research, content development, R&D in pharmaceuticals and biotechnology, market research, equity research, data research, database creation, analytical services, financial modeling, design and development in automotive and aerospace industries, animation and simulation, medical content and services, remote education, publishing and legal support. MBAs, PhDs, engineers, doctors, lawyers and other specialists are expected to be much in demand.
Most low-level BPO jobs provide support for an organization's core competencies and entry-level prerequisites are simply a command of English (or applicable language) and basic computer skills. Knowledge process outsourcing jobs, in comparison, are typically integrated with an organization's core competencies. The jobs involve more complex tasks and may require an advanced degree and/or certification. Examples of KPO include accounting, market and legal research, Web design and content creation.
The success achieved by many overseas companies in outsourcing business process operations to India has encouraged many of them to start outsourcing their high-end knowledge work as well. Cost savings, operational efficiencies, access to a highly skilled and talented workforce and improved quality are all underlying expectations in offshoring high-end processes to India.
KPO delivers high value to organizations by providing domain-based processes and business expertise rather than just process expertise. These processes demand the advanced analytical and specialized skills of knowledge workers with domain experience to their credit. Outsourcing of knowledge processes therefore faces more challenges than BPO (Business Process Outsourcing). Some of the challenges involved in KPO are maintaining higher quality standards, investment in KPO infrastructure, a limited talent pool, the requirement for a higher level of control, confidentiality and enhanced risk management. Weighing these challenges against the strengths of Indian IT and ITES service providers, it is not surprising that India has been ranked the most preferred KPO destination owing to the country's large talent pool, quality IT training, friendly government policies and low labor costs.
Even the Indian government has recognized that knowledge processes will influence economic development extensively in the future and has taken remarkable measures towards liberalization and deregulation. Recent reforms have reduced licensing requirements, made foreign technology accessible, removed restrictions on investment and made the process of investment much easier. The government has been continuously improving infrastructure with better roads, setting up technology parks, opening up telecom for enhanced connectivity and providing uninterrupted power to augment growth.
The last five years have seen vast development in Knowledge Parks, with infrastructure of global standards, in cities like Chennai, Bangalore and Gurgaon. Multi-tenanted ‘intelligent’ buildings, built-to-suit facilities and sprawling campuses are tailor-made to suit customer requirements.
Knowledge Process Outsourcing has proven to be a boon for increasing productivity and cost savings in the area of market research. Organizations are adopting outsourcing to meet their market research needs, and the trend is set to take the global market research industry by storm. India's intellectual potential is the key factor behind India being the favored destination for the KPO industry.
A major reason why companies in India will have no option but to move up the value chain from BPO to KPO is quite simple. By 2010, India may have become too costly to provide low-end services at competitive costs. For example, Evalueserve says Indian salaries have increased at an average of 14 per cent a year. The number of professionals working in the offshore industry is expected to increase as more and more companies decide to become involved in BPO and KPO. This will further drive the trend towards the migration of low-end services to high-end services, especially as offshore service vendors (as well as professionals working in this sector) gain experience and capabilities to provide high-value services.
The future of KPO is very bright. Surveys have shown that the global KPO industry is expected to reach nearly 17 billion dollars by the end of 2010, of which approximately 12 billion dollars worth of business will be outsourced to India. What’s more, the Indian KPO industry is expected to employ an additional 250,000 KPO professionals by the end of 2010, compared with the current estimated figure of 25,000 employees. Predictions have been made that India will capture nearly 70 percent of the KPO sector, and the sector has high potential because it is not restricted to the Information Technology (IT) or Information Technology Enabled Services (ITES) sectors, and includes areas like Intellectual Property related services, Business Research and Analytics, Legal Research, Clinical Research, Publishing and Market Research.

Computing with nanotechnology: Nanocomputing



A nanocomputer is the logical name for a computer smaller than the microcomputer, which is smaller than the minicomputer. More technically, it is a computer whose fundamental parts are no bigger than a few nanometers.

The world has been moving from mini to micro, and the latest step is nanotechnology. A nanometer is a unit of measure equal to a billionth of a meter; roughly ten atoms fit side by side in a nanometer. Nanotechnology today is an emerging set of tools, techniques, and unique applications involving the structure and composition of materials on a nanoscale. It is the art of manipulating materials on an atomic or molecular scale to build microscopic devices such as robots, which in turn will assemble individual atoms and molecules into products much as if they were Lego blocks. Nanotechnology is about building things one atom at a time, about making extraordinary devices with ordinary matter. A nanocomputer is a computer whose physical dimensions are microscopic, and the field of nanocomputing is part of the emerging field of nanotechnology. Nanocomputing describes computing that uses extremely small, or nanoscale, devices.
A nanocomputer is similar in many respects to the modern personal computer, but on a scale that is very much smaller. Access to several thousand (or even millions) of nanocomputers, depending on a user's needs, gives a whole new meaning to the expression "unlimited computing": users may be able to gain a lot more power for less money. Several types of nanocomputers have been suggested or proposed by researchers and futurists. Electronic nanocomputers would operate in a manner similar to the way present-day microcomputers work; the main difference is one of physical scale. More and more transistors are squeezed into silicon chips with each passing year; witness the evolution of integrated circuits (ICs) capable of ever-increasing storage capacity and processing power. The ultimate limit to the number of transistors per unit volume is imposed by the atomic structure of matter, and most engineers agree that technology has not yet come close to pushing this limit. In the electronic sense, the term nanocomputer is relative; by 1970s standards, today's ordinary microprocessors might be called nanodevices.
Chemical and biochemical nanocomputers have the power to store and process information in terms of chemical structures and interactions. Biochemical nanocomputers already exist in nature; they are manifest in all living things. But these systems are largely uncontrollable by humans. The development of a true chemical nanocomputer will likely proceed along lines similar to genetic engineering. Engineers must figure out how to get individual atoms and molecules to perform controllable calculations and data storage tasks.
Mechanical nanocomputers would use tiny moving components called nanogears to encode information. Such a machine is suggestive of Charles Babbage’s analytical engines of the 19th century. For this reason, mechanical nanocomputer technology has sparked controversy; some researchers consider it unworkable. All the problems inherent in Babbage's apparatus, according to the naysayers, are magnified a millionfold in a mechanical nanocomputer. Nevertheless, some futurists are optimistic about the technology, and have even proposed the evolution of nanorobots that could operate, or be controlled by, mechanical nanocomputers.
A quantum nanocomputer would work by storing data in the form of atomic quantum states or spin. Technology of this kind is already under development in the form of single-electron memory (SEM) and quantum dots. The energy state of an electron within an atom, represented by the electron energy level or shell, can theoretically represent one, two, four, eight, or even 16 bits of data. The main problem with this technology is instability. Instantaneous electron energy states are difficult to predict and even more difficult to control. An electron can easily fall to a lower energy state, emitting a photon; conversely, a photon striking an atom can cause one of its electrons to jump to a higher energy state.
There are several ways nanocomputers might be built, using mechanical, electronic, biochemical, or quantum technology. It is unlikely that nanocomputers will be made out of semiconductor transistors (Microelectronic components that are at the core of all modern electronic devices), as they seem to perform significantly less well when shrunk to sizes under 100 nanometers.
Computing systems implemented with nanotechnology will need to employ defect- and fault-tolerant measures to improve their reliability, both because many factors can lead to imperfect device fabrication and because nanometer-scale devices are more susceptible to environmentally induced faults. Researchers have approached this reliability problem from many angles, and promising approaches range from classical fault-tolerant techniques to techniques specific to nanocomputing. Published results suggest that many useful, yet strikingly different, solutions may exist for tolerating defects and faults within nanocomputing systems. A number of software tools have also been developed for quantifying the reliability of nanocomputing systems in the presence of defects and faults.
The potential for a microscopic computer appears to be endless. Along with use in the treatment of many physical and emotional ailments, the nanocomputer is sometimes envisioned to allow for the ultimate in a portable device that can be used to access the Internet, prepare documents, research various topics, and handle mundane tasks such as email. In short, all the functions that are currently achieved with desktop computers, laptops, and hand held devices would be possible with a nanocomputer that is inserted into the body and directly interacts with the brain.
Despite the hype about nanotechnology in general and nanocomputing in particular, a number of significant barriers must be overcome before any progress can be claimed.
Work is needed in all areas associated with computer hardware and software design:
• Nanoarchitectures and infrastructure
• Communications protocols between multiple nanocomputers, networks, grids, and the Internet
• Data storage, retrieval, and access methods
• Operating systems and control mechanisms
• Application software and packages
• Security, privacy, and accuracy of data
• Circuit faults and failure management
Basically, the obstacles can be divided into two distinct areas:
Hardware: the physical composition of a nanocomputer, its architecture, its communications structure, and all the associated peripherals
Software: new software, operating systems, and utilities must be written and developed, enabling very small computers to execute in the normal environment.

Nanocomputers have the potential to revolutionize the 21st century. Increased investments in nanotechnology could lead to breakthroughs such as molecular computers. Billions of very small and very fast (but cheap) computers networked together can fundamentally change the face of modern IT computing in corporations that today are using mighty mainframes and servers. This miniaturization will also spawn a whole series of consumer-based computing products: computer clothes, smart furniture, and access to the Internet that's a thousand times faster than today's fastest technology.
Nanocomputing's best bet for success today comes from being integrated into existing products, PCs, storage, and networks—and that's exactly what's taking place.
The following list presents just a few of the potential applications of nanotechnology:
• Expansion of mass-storage electronics to huge multi-terabit memory capacity, increasing by a thousand fold the memory storage per unit. Recently, IBM's research scientists announced a technique for transforming iron and a dash of platinum into the magnetic equivalent of gold: a nanoparticle that can hold a magnetic charge for as long as 10 years. This breakthrough could radically transform the computer disk-drive industry.
• Making materials and products from the bottom up; that is, by building them from individual atoms and molecules. Bottom-up manufacturing should require fewer materials and pollute less.
• Developing materials that are 10 times stronger than steel, but a fraction of the weight, for making all kinds of land, sea, air, and space vehicles lighter and more fuel-efficient. Such nanomaterials are already being produced and integrated into products today.
• Improving the computing speed and efficiency of transistors and memory chips by factors of millions, making today's chips seem as slow as the dinosaur. Nanocomputers will eventually be very cheap and widespread. Supercomputers will be about the size of a sugar cube.

Near to online Storage: Nearline Storage

It is one of the oldest forms of storage: any medium used to copy data from the hard drive and store it in a source from which it is easily retrieved.
The word "nearline" is a contraction of "near-online" and is the term used in computer science to describe an intermediate type of data storage that represents a compromise between online storage (supporting frequent, very rapid access to data) and offline storage/archiving (used for backups or long-term storage, with infrequent access to data).
Nearline storage has many of the same features, performance characteristics, and device requirements as online storage. However, nearline storage is deployed as backup support for online storage. Demand for nearline storage is growing rapidly because more information must be archived for regulatory reasons. Nearline storage is commonly used for data backup because it can back up large volumes of data quickly, which sometimes cannot be achieved with the slower bandwidth of tape-based solutions. Nearline storage is built using less expensive disk drives, such as SATA drives, to store information that must be accessed more quickly than is possible through tape or tape libraries.
Both archiving (offline) and nearline allow a reduction of database size that results in improved speed of performance for the online system. However, accessing archived data is more complex and/or slower than is the case with nearline storage, and can also negatively affect the performance of the main database, particularly when the archive data must be reloaded into that database.
There are three major categories of near-line storage: magnetic disk, magnetic tape, and compact disc (CD). Magnetic disks include 3.5-inch diskettes, and various removable media such as the Iomega Zip disk and the Syquest disk. Tapes are available in almost limitless variety. Examples of media in the CD category are CD recordable (CD-R), CD rewriteable (CD-RW), and digital versatile disc rewriteable (DVD-RW).
Near-line storage provides inexpensive, reliable, and virtually unlimited data backup and archiving with somewhat less accessibility than integrated online storage. For individuals and small companies, it can be an ideal solution if the user is willing to tolerate some time delay when storing or retrieving data. Near-line storage media are immune to infection by online viruses, Trojan horses, and worms because the media are physically disconnected from networks, computers, servers, and the Internet. When a near-line storage medium is being employed to recover data, it can be write-protected to prevent infection. Even so, it is suggested that near-line storage media always be scanned with an anti-virus program before use.
The capacity and efficiency of near-line storage options has improved greatly over the years. Magnetic tape is one of the oldest formats still in use. The tape is available in formats that work with a wide range of large systems and are frequently used to create backup files for corporations on a daily basis. The tapes are easily stored and can be used to reload the most recently saved information in the event of a system failure. Magnetic tapes also function as an excellent electronic history, making it possible to research when a given bit of data was entered into the system.
The second type of near-line storage is the magnetic disk, developed for use with personal computers but now considered obsolete in many quarters, along with disks developed for specific purposes such as storing large quantities of zipped files. Since the early 21st century, most desktop and laptop computers have dropped the removable magnetic disk drive, although mainframes sometimes still make use of some type of magnetic disk.
As the most recent innovation in removable storage options, the CD provides a lot of storage in a small space. The CD encompasses different formats for different file-saving activities. The CD-R, or CD recordable, makes it possible to copy a wide range of text and similar documents. The CD rewritable, or CD-RW, makes it possible to easily load data onto the disc and also to load the data onto another system with ease. The digital versatile disc rewritable, or DVD-RW, allows the copying of all types of media, including video.
One of the advantages of near-line storage is that these devices offer a means of protecting data from harm. This includes keeping the data free from viruses or bugs that may infect the hard drive at some point. While the hard drive may become corrupted and damage files loaded on the drive, data housed on near-line storage devices remains unaffected and can be used to reload the hard drive once the system is cleansed of any type of malware. Another benefit of near-line storage is the fact that this storage option is extremely inexpensive. Individuals and small companies find that utilizing these simple data storage devices provides a great deal of security and peace of mind without requiring any type of ongoing expense. Once the device is purchased and the storage of data is complete, the information can be filed in a cabinet or a drawer and restored when and as needed.
In the event that a near-line storage device is used frequently to load and unload data, it is a good idea to scan the disk or tape with some type of antivirus software before commencing the activity. There is always the slight possibility that the medium became infected when used last. Scanning and removing any viruses or other potentially damaging files will ensure the virus does not have the chance to proliferate to other systems in the network.
Near-line storage is the on-site storage of data on removable media. The removable storage idea dates back to the mainframe computer and, in the age of the smaller computer, remains popular among individuals, small businesses, and the large enterprise.

Keep the data secure with Offline storage

As the name suggests, offline storage is storage in which the data is physically removed from the network and cannot be accessed by the computer. It is commonly referred to as "archive" or "backup" storage and is typically a tape drive or low-end disk drive (virtual tape). Offline or disconnected storage is designed for storing data for long periods of time; because data is archived, offline storage appliances focus on data accuracy, protection, and security. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again, so unlike tertiary storage, it cannot be accessed without human interaction. Optical discs and flash memory devices are the most popular media, and to a much lesser extent removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, in case a disaster such as a fire destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is seldom or never accessed, off-line storage is less expensive than tertiary storage.
Backing up to tape is very useful for data that is created online and then ages or becomes less important to the company. Many companies migrate such data to less expensive storage media like tape. When the tapes contain replicas of online data, or the data simply needs to be kept for archive purposes, they are often shipped offsite and offline as part of the company's data migration plan. Offline storage once meant exclusively tapes on shelves or carted off to vaults. Companies like tape because it is removable, relatively cheap, and has a long shelf life for archival; but many companies would like to see those same attributes apply to disk, and now they can.
Offline storage is very practical as a data transmission medium, and it can also serve as a good backup since it is remotely located and hence will not be affected by any disaster that might hit the direct source of the data. Offline storage also provides good security for data, since it cannot easily be accessed from a computer system.

Wednesday, June 16, 2010

Use of Solid state physics in storage

SSDs have been used in enterprise storage to speed up applications and performance without the cost of adding additional servers, and by using solid-state memory to store data they replace the traditional hard disk drive.
The name solid-state drive has nothing to do with the drive being liquid or solid; the term "solid-state" (from solid-state physics) refers to the use of semiconductor devices rather than electron tubes, and in the present context it distinguishes solid-state electronics from electromechanical devices. Solid-state drives also enjoy greater stability than their disk counterparts because they have no moving parts. As a result, solid-state drives are less fragile than hard disks and are also silent (unless a cooling fan is used); as there are no mechanical delays, they usually enjoy low access time and latency. Solid-state drive (SSD) technology was first marketed to the military and niche industrial markets in the mid-1990s.
Almost all electronics that we have today are built from semiconductors and chips. In the case of an SSD, this means that the primary storage medium is semiconductor memory rather than magnetic media such as a hard drive.
An SSD looks no different from a traditional hard drive. This design enables the SSD to be put in a notebook or desktop computer in place of a hard drive. To do this, it needs to have the same standard dimensions as a 1.8-, 2.5- or 3.5-inch hard drive, and it uses either the ATA or SATA drive interface so that there is a compatible connection.
Actually, this type of storage already exists in the form of flash memory drives that plug into the USB port. In fact, solid-state drives and USB flash drives both use the same type of non-volatile memory chips that retain their information even when they have no power. The difference lies in the form factor and capacity of the drives. While a flash drive is designed to be external to the computer system, an SSD is designed to reside inside the computer in place of a more traditional hard drive.
An SSD is commonly composed of DRAM volatile memory or, more often, NAND flash non-volatile memory. Most SSD manufacturers use non-volatile flash memory to create more rugged and compact devices for the consumer market. These flash-memory-based SSDs, also known as flash drives, do not require batteries. They are often packaged in standard disk drive form factors (1.8-, 2.5-, and 3.5-inch). A further advantage is that non-volatility allows flash SSDs to retain data even during sudden power outages, ensuring persistence. Compared with DRAM SSDs, flash-memory SSDs are slower, and some designs are even slower than traditional HDDs on large files, but flash SSDs have no moving parts, so the seek times and other delays inherent in conventional electro-mechanical disks are negligible.
SSDs based on volatile memory such as DRAM have ultrafast data access, generally less than 10 microseconds, and are used primarily to accelerate applications that would otherwise be held back by the latency of Flash SSDs or traditional HDDs. DRAM-based SSDs usually incorporate either an internal battery or an external AC/DC adapter and backup storage systems to ensure data persistence while no power is being supplied to the drive from external sources. If power is lost, the battery provides power while all information is copied from random access memory (RAM) to back-up storage. When the power is restored, the information is copied back to the RAM from the back-up storage, and the SSD resumes normal operation. (Similar to the hibernate function used in modern operating systems.) These types of SSD are usually fitted with the same type of DRAM modules used in regular PCs and servers, allowing them to be swapped out and replaced with larger modules.
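To make that flow concrete, here is a minimal sketch in Python of how a DRAM-based SSD with a battery and a backup store could preserve data across an outage; the class and method names are hypothetical, and a real drive does all of this in firmware.

```python
# Rough model of the DRAM-SSD persistence flow described above: on power
# loss the battery keeps the drive alive long enough to dump RAM to backup
# storage; on restore the contents are copied back and operation resumes.
class DramSsdModel:
    def __init__(self):
        self.ram = {}        # fast, volatile working store (DRAM)
        self.backup = {}     # non-volatile backup (e.g. flash or disk)

    def write(self, lba, data):
        self.ram[lba] = data

    def on_power_loss(self):
        # Battery-powered dump of volatile contents to backup storage.
        self.backup = dict(self.ram)
        self.ram.clear()     # DRAM contents vanish once power is truly gone

    def on_power_restore(self):
        # Reload the saved image, then resume normal operation.
        self.ram = dict(self.backup)

ssd = DramSsdModel()
ssd.write(0, b"hello")
ssd.on_power_loss()
ssd.on_power_restore()
print(ssd.ram[0])            # b'hello' survived the outage
```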
Flash-based solid-state drives are very versatile and can be used to create network appliances from general-purpose PC hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs, and appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware. NAND flash used as a cache in hybrid drives offers a potential power saving; however, typical usage patterns result in cache misses in the NAND flash as well, leading to continued spinning of the drive platter, or to much longer latency if the drive needs to spin up. Such devices are slightly more energy efficient but may prove no better in performance.
On the other hand, flash-memory drives have limited lifetimes and will often wear out after 1,000,000 to 2,000,000 write cycles (1,000 to 10,000 per cell) for MLC, and up to 5,000,000 write cycles (100,000 per cell) for SLC. Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, a technique called wear leveling. Another concern is the security implications: for example, encryption of existing unencrypted data on flash-based SSDs cannot be performed securely, because wear leveling causes newly encrypted drive sectors to be written to a physical location different from their original location, so the data remains unencrypted in the original physical location. Apart from these disadvantages, the capacity of SSDs is currently lower than that of hard drives, although flash SSD capacity is predicted to increase rapidly, with drives of 1 TB already released for enterprise and industrial applications. The asymmetric read-versus-write performance can cause problems with certain functions where read and write operations are expected to complete in a similar timeframe; SSDs currently have much slower write performance compared to their read performance.
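Wear leveling is easiest to see with a toy sketch. The Python snippet below is illustrative only (real flash translation layers are far more elaborate): it redirects every write of a logical block to the least-worn free physical block and tracks erase counts, which also shows why old data can linger at its original physical location.

```python
# Toy illustration of dynamic wear leveling: each logical-block write is
# redirected to the least-worn free physical block, so writes spread out
# instead of repeatedly hammering the same cells.
class ToyWearLevelingFTL:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks   # wear per physical block
        self.logical_to_physical = {}               # current mapping
        self.free_blocks = set(range(physical_blocks))

    def write(self, logical_block, data):
        # Retire the old physical block; note it is not overwritten in place,
        # which is why stale (e.g. unencrypted) data can remain there.
        old = self.logical_to_physical.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        # Place the new data on the least-worn free block.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        self.logical_to_physical[logical_block] = target
        return target  # where the data physically landed

ftl = ToyWearLevelingFTL(physical_blocks=8)
for _ in range(20):
    ftl.write(logical_block=0, data=b"x")   # same logical block, rotating physical blocks
print(ftl.erase_counts)                     # wear is spread across the device
```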
As a result of wear leveling and write combining, the performance of SSDs degrades with use. DRAM-based SSDs (but not flash-based SSDs) require more power than hard disks when operating, and they continue to draw power when the computer is turned off, while hard disks do not.
To offset these disadvantages, solid-state drives have several advantages over magnetic hard drives, most of which come from the fact that the drive does not have any moving parts. While a traditional drive has motors to spin up the magnetic platters and move the drive heads, all storage on a solid-state drive is handled by flash memory chips. This provides distinct advantages: lower power usage, faster data access, and higher reliability. Solid-state drives consume far less power in portable computers; because there is no power draw for motors, the drive uses much less energy than a regular hard drive. SSDs also provide faster data access: since the drive does not have to spin up a platter or move drive heads, data can be read from the drive almost instantly. Reliability is also a key factor for portable drives. Hard drive platters are very fragile and sensitive materials, and even small jarring movements from an impact can render the drive completely unreadable. Since the SSD stores all its data in memory chips, there are fewer moving parts to be damaged in any sort of impact.
As with most computer technologies, the primary limiting factor of using the solid state drives in notebook and desktop computers is cost. These drives have actually been available for some time now, but the cost of the drives is roughly the same as the entire notebook they could be installed into.
The other problem affecting the adoption of solid-state drives is capacity. Current hard drive technology can allow for over 200GB of data in a small 2.5-inch notebook hard drive, whereas most SSD drives announced at the 2007 CES show were of 64GB capacity. This means that not only are the drives much more expensive than a traditional hard drive, they only hold a fraction of the data.
All of this is set to change soon, though. Several companies that specialize in flash memory have announced upcoming products that look to push the capacities of solid-state drives closer to those of normal hard drives, at even lower prices than current SSDs. This will have a huge impact on notebook data storage. SSD is a rapidly developing technology, the performance of flash SSDs is difficult to benchmark, and in the coming years it is surely going to extend its reach.

King Bluetooth: The name behind the Bluetooth technology



It’s a short-range communications technology that replaces the cables connecting portable or fixed devices while maintaining high levels of security. The key features of Bluetooth technology are robustness, low power, and low cost. The Bluetooth Specification defines a uniform structure for a wide range of devices to connect and communicate with each other.

Technological development has reached a point where devices still connected via cables or wires seem outdated, and this wireless shift has given us the term "Bluetooth". The term is not new to most of us; we often use this technology in our day-to-day lives to transfer information and files. It is a short-range wireless radio technology that allows electronic devices to connect to one another. The term Bluetooth has a small story behind it that justifies its unusual name: the developers of this wireless technology first used "Bluetooth" as a code name, but as time passed, the name stuck. The word "Bluetooth" is actually taken from the 10th century Danish King Harald Bluetooth, who had been influential in uniting Scandinavian Europe during an era when the region was torn apart by wars and feuding clans. Bluetooth technology was first developed in Scandinavia and has been able to unite differing industries such as the cell phone, computing, and automotive markets. Bluetooth wireless technology simplifies and combines multiple forms of wireless communication into a single, secure, low-power, low-cost, globally available radio frequency.
Bluetooth technology makes connections just like the cables that connect a computer to a keyboard, mouse, or printer, or the wire that connects an MP3 player to headphones, but it does so without the cables and wires. With Bluetooth there is no more worrying about which cable goes where, or getting tangled in the mess.
Bluetooth is a packet-based protocol with a master-slave structure. Connections between Bluetooth enabled electronic devices allow these devices to communicate wirelessly through short-range, ad hoc networks known as piconets. Each device in a piconet can also simultaneously communicate with up to seven other devices within that single piconet and each device can also belong to several piconets simultaneously. This means the ways in which you can connect your Bluetooth devices is almost limitless.
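A rough way to picture that topology: the sketch below (Python, with illustrative class names not taken from any Bluetooth API) models a piconet as one master with at most seven active slaves, where a device may also sit in several piconets at once.

```python
# Minimal sketch of the piconet topology described above: one master,
# at most seven active slaves, and devices that may belong to several
# piconets simultaneously (forming a scatternet).
MAX_ACTIVE_SLAVES = 7

class Device:
    def __init__(self, name):
        self.name = name
        self.piconets = []          # a device can belong to several piconets

class Piconet:
    def __init__(self, master):
        self.master = master
        self.slaves = []
        master.piconets.append(self)

    def add_slave(self, device):
        if len(self.slaves) >= MAX_ACTIVE_SLAVES:
            raise RuntimeError("piconet already has 7 active slaves")
        self.slaves.append(device)
        device.piconets.append(self)

phone, headset, laptop = Device("phone"), Device("headset"), Device("laptop")
net = Piconet(master=phone)
net.add_slave(headset)
net.add_slave(laptop)
print([d.name for d in net.slaves])   # ['headset', 'laptop']
```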
Bluetooth actually is one of the secure ways to connect and exchange information between devices such as faxes, mobile phones, telephones, laptops, personal computers, printers, Global Positioning System (GPS) receivers, digital cameras, and video game consoles.
Bluetooth uses a radio technology called frequency-hopping spread spectrum, which chops up the data being sent and transmits chunks of it on up to 79 bands of 1 MHz width in the range 2402-2480 MHz. This is in the globally unlicensed Industrial, Scientific and Medical (ISM) 2.4 GHz short-range radio frequency band.
Bluetooth technology’s adaptive frequency hopping (AFH) capability was designed to reduce interference between wireless technologies sharing the 2.4 GHz spectrum. AFH works within the spectrum to take advantage of the available frequency. This is done by the technology detecting other devices in the spectrum and avoiding the frequencies they are using. This adaptive hopping among 79 frequencies at 1 MHz intervals gives a high degree of interference immunity and also allows for more efficient transmission within the spectrum.
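The hop-set idea behind AFH can be sketched as follows. In this Python toy, the pseudo-random choice merely stands in for the hop-selection algorithm defined in the specification, and the interference band is hypothetical; it enumerates the 79 one-megahertz channels starting at 2402 MHz and drops those detected as busy before hopping.

```python
import random

# Toy sketch of adaptive frequency hopping (AFH): Bluetooth hops across
# 79 channels of 1 MHz starting at 2402 MHz, and AFH removes channels
# where interference (e.g. from Wi-Fi) has been detected from the hop set.
CHANNELS_MHZ = [2402 + i for i in range(79)]           # 2402 ... 2480 MHz

def hop_sequence(hops, bad_channels=frozenset(), seed=0):
    """Yield a pseudo-random sequence of usable channel frequencies."""
    usable = [c for c in CHANNELS_MHZ if c not in bad_channels]
    rng = random.Random(seed)                          # stand-in for the spec's hop algorithm
    for _ in range(hops):
        yield rng.choice(usable)

# Suppose channels overlapping a nearby Wi-Fi network are detected as busy:
busy = frozenset(range(2412, 2434))                    # hypothetical interference band
print(list(hop_sequence(10, bad_channels=busy)))       # hops avoid the busy band
```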
Bluetooth range varies depending on the class of radio used in an implementation:
• Class 3 radios: up to 1 meter (3 feet)
• Class 2 radios, most commonly found in mobile devices: up to 10 meters (33 feet)
• Class 1 radios, used primarily in industrial use cases: up to 100 meters (300 feet)
Wireless Nature
There are tons of benefits of using wireless devices. In addition to improving safety as a result of eliminating the clutter of wires and associated hazardous connections, wireless technology also offers many convenient advantages. For example, when you are traveling with your laptop, PDA, MP3 player and other devices, you no longer have to worry about bringing along all of your connecting cables.
Inexpensive
Bluetooth technology is inexpensive for companies to implement, which results in lower overall manufacturing costs, and the ultimate benefit goes to the consumers. The end result: Bluetooth devices are relatively inexpensive.
Automatic
Usage is very simple, and one does not have to struggle to get devices connected. When two or more Bluetooth devices come within range (up to 30 feet) of one another, they automatically begin to communicate without you having to do anything. Once communication begins, Bluetooth devices set up Personal Area Networks, or piconets.
Standardized Protocol
Since Bluetooth is a standardized wireless specification, a high level of compatibility among devices is guaranteed. The Bluetooth specification uses and defines various profiles. Every Bluetooth profile is specific to a particular function. For instance, when a Bluetooth enabled cell phone and a Bluetooth headset (Both with the same profile) are communicating with one another, both will understand each other without the user having to do anything, even if the devices are of different models/makes.
Low Interference
Bluetooth devices avoid interference with other wireless devices by using a technique known as spread-spectrum frequency hopping and by using low-power wireless signals.
Low Energy Consumption
Bluetooth uses low power signals. As a result, the technology requires little energy and will therefore use less battery or electrical power.
Share Voice and Data
The Bluetooth standard allows compatible devices to share both voice and data communications. For example, a Bluetooth enabled cell phone is capable of sharing voice communications with a compatible Bluetooth headset; nevertheless, the same cell phone may also be capable of establishing a GPRS connection to the Internet. Then, using Bluetooth, the phone can connect to a laptop. The result: The laptop is capable of surfing the web or sending and receiving email.
Control
Unless a device is already paired to your device, you have the option to accept or reject the connection and file transfer. This prevents unnecessary or infected files from unknown users from transferring to your device.
Instant Personal Area Network (PAN)
Up to seven compatible Bluetooth devices can connect to one another within proximity of up to 30 feet, forming a PAN or piconet. Multiple piconets can be automatically setup for a single room.
Upgradeable
The Bluetooth standard is upgradeable. A development group, the Bluetooth Special Interest Group (SIG), is responsible for evolving the standard so that newer versions offer new advantages while remaining backward compatible with older versions.
Bluetooth has many positive aspects and it is difficult to find a downside, but there are still some areas that need attention. Like other communication technologies, Bluetooth faces issues of privacy and identity theft. But these issues are easily combated, and various measures are already in place to provide for the secure use of Bluetooth technology.
Bluetooth has a shortcoming when it comes to file-sharing speed. Compared with the transfer rate of up to 4.0 Mbps for infrared, Bluetooth can only reach about 1.0 Mbps, meaning that it transfers files relatively slowly. For transferring or sharing larger files at closer distances, other wireless technologies like infrared are better.
With devices that are handy, portable, sophisticated, easy to handle and connected through a wireless mechanism, Bluetooth can be considered a technology that is going to sustain itself in the market for a long time.

Friday, June 4, 2010

Green touch to the technology


Green computing is also usually referred to as Green IT. The idea is to have the least possible human impact on the environment. Beyond this, it aims to achieve environmental sustainability through the environmentally responsible use of computers and their resources.
The 21st century is the era of computers, gadgets and technologies, and these are fuelling energy issues. As topics like global warming, climate change and carbon emissions get hotter, it is time to "go green" not only in our regular lives but also in technology.

Green computing, or green IT, refers to environmentally sustainable computing: the efficient use of resources in computing. The term generally relates to the use of computing resources in conjunction with minimizing environmental impact, maximizing economic viability and ensuring social duties. Most of us think computers are nonpolluting and consume very little energy, but this is a wrong notion. It is estimated that, of the $250 billion per year spent on powering computers worldwide, only about 15% of that power is spent computing; the rest is wasted idling. Thus, energy saved on computer hardware and computing will equate to tonnes of carbon emissions saved per year. Given how widely information technology is used, the industry has to lead a revolution of sorts by turning green in a manner no industry has ever done before.
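Taking those figures at face value, a back-of-the-envelope calculation shows how much of that spend goes to idling:

```python
# Back-of-the-envelope calculation using the figures quoted above.
annual_power_spend = 250e9      # USD per year spent powering computers (the article's estimate)
useful_fraction = 0.15          # share of that power actually spent computing

wasted_spend = annual_power_spend * (1 - useful_fraction)
print(f"Spend wasted on idling: ${wasted_spend / 1e9:.1f} billion per year")
# -> Spend wasted on idling: $212.5 billion per year
```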

Green IT is "the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—such as monitors, printers, storage devices, and networking and communications systems—efficiently and effectively with minimal or no impact on the environment. It includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling."

Opportunities lie in green technology like never before in history, and organizations see it as a way to create new profit centres while helping the environmental cause. A plan for green IT should include new electronic products and services with optimum efficiency and all possible options for energy savings. For example, recycling computing equipment can keep harmful materials such as lead, mercury, and hexavalent chromium out of landfills, and can also replace equipment that otherwise would need to be manufactured, saving further energy and emissions. Efficient power supplies are helping by running at 80% efficiency or better, and power management software helps computers sleep or hibernate when not in use. On the far horizon, reversible computing (which also includes quantum computing) promises to reduce power consumption by a factor of several thousand, but such systems are still very much in the laboratories. The best way to recycle a computer, however, is to keep it and upgrade it. Further, it is important to design computers that can be powered with the low power obtained from non-conventional energy sources such as solar energy, pedaling a bike, or turning a hand-crank.

Modern IT systems rely upon a complicated mix of networks and hardware; as such, a green computing initiative must cover all of these areas as well. There are considerable economic motivations for companies to take control of their own power consumption; of the power management tools available, one of the most powerful may still be simple, plain, common sense.
Product longevity
The PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC. "Look for product longevity, including upgradeability and modularity." For instance, manufacturing a new PC makes a far bigger ecological footprint than manufacturing a new RAM module to upgrade an existing machine, a common upgrade that saves the user having to purchase a new computer.
Resource allocation
Algorithms can also be used to route data to data centers where electricity is less expensive. This approach does not actually reduce the amount of energy being used; it only reduces the cost to the company using it. However, a similar strategy could be used to direct traffic to rely on energy that is produced in a more environmentally friendly or efficient way. A similar approach has also been used to cut energy usage by routing traffic away from data centers experiencing warm weather; this allows computers to be shut down to avoid using air conditioning.
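A minimal sketch of that routing idea, with entirely made-up prices and temperatures: pick the data center with the cheapest electricity, skipping sites where warm weather would force heavy air conditioning.

```python
# Illustrative sketch of the routing idea above: send work to the data
# center with the cheapest electricity, skipping sites where warm weather
# would require air conditioning. All figures are invented for the example.
def pick_datacenter(centers, max_outside_temp_c=28):
    eligible = [c for c in centers if c["outside_temp_c"] <= max_outside_temp_c]
    candidates = eligible or centers            # fall back if every site is hot
    return min(candidates, key=lambda c: c["price_per_kwh"])

centers = [
    {"name": "dc-east",  "price_per_kwh": 0.11, "outside_temp_c": 33},
    {"name": "dc-north", "price_per_kwh": 0.09, "outside_temp_c": 18},
    {"name": "dc-west",  "price_per_kwh": 0.07, "outside_temp_c": 31},
]
print(pick_datacenter(centers)["name"])         # -> dc-north (cheapest cool site)
```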
Virtualization
Computer virtualization refers to the abstraction of computer resources, such as the process of running two or more logical computer systems on one set of physical hardware.
Terminal servers
Terminal servers have also been used in green computing. When using such a system, users at a terminal connect to a central server; all of the actual computing is done on the server, but the end user experiences the operating system on the terminal. These can be combined with thin clients, which use as little as 1/8 the energy of a normal workstation, resulting in a decrease in energy costs and consumption.
Power management
The Advanced Configuration and Power Interface (ACPI), an open industry standard, allows an operating system to directly control the power-saving aspects of its underlying hardware. This allows a system to automatically turn off components such as monitors and hard drives after set periods of inactivity. In addition, a system may hibernate, where most components (including the CPU and the system RAM) are turned off.
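The policy an operating system layers on top of ACPI can be sketched as a simple idle-timeout table; the thresholds below are illustrative, not taken from any real configuration.

```python
# Sketch of the inactivity policy an OS implements on top of ACPI:
# progressively deeper power-saving states after longer idle periods.
# Thresholds are illustrative only.
POWER_POLICY = [                                   # ordered from deepest to lightest
    (30 * 60, "hibernate (suspend to disk, CPU and RAM off)"),
    (10 * 60, "spin down hard drives"),
    (5 * 60,  "turn off monitor"),
]

def power_action(idle_seconds):
    for threshold, action in POWER_POLICY:
        if idle_seconds >= threshold:
            return action
    return "stay fully on"

for idle in (60, 6 * 60, 12 * 60, 45 * 60):
    print(f"idle {idle // 60:>2} min -> {power_action(idle)}")
```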
With this green vision, the industry has been focusing on power efficiency throughout the design and manufacturing process of its products using a range of clean-computing strategies, and it is striving to educate markets on the benefits of green computing for the sake of the environment, as well as productivity and the overall user experience. A few initiatives will change the overall structure:
Carbon-free computing
The idea is to reduce the "carbon footprint" of users: the amount of greenhouse gases produced, measured in units of carbon dioxide (CO2). Some PC products are certified carbon free, with the vendor taking responsibility for the amounts of CO2 they emit. The use of silicon-on-insulator (SOI) technology in manufacturing, and of strained silicon capping films on transistors (known as "dual stress liner" technology), has contributed to reduced power consumption in such products.
Solar Computing
Solar cells paired with power-efficient silicon, platform, and system technologies make it possible to develop fully solar-powered devices that are nonpolluting, silent, and highly reliable. Solar cells require very little maintenance throughout their lifetime, and once initial installation costs are covered, they provide energy at virtually no cost.
Energy-efficient computing
Another green-computing initiative is the development of energy-efficient platforms for low-power, small-form-factor (SFF) computing devices. In 2005, VIA introduced the VIA C7-M and VIA C7 processors, which have a maximum power consumption of 20W at 2.0GHz and an average power consumption of 1W. These energy-efficient processors produce over four times less carbon during their operation and can be efficiently embedded in solar-powered devices. Intel, the world's largest semiconductor maker, revealed eco-friendly products at a recent conference in London. The company uses virtualization software, a technique that enables it to combine several physical systems into virtual machines that run on a single, powerful base system, thus significantly reducing power consumption.
Power supply
Desktop computer power supplies (PSUs) are generally 70–75% efficient, dissipating the remaining energy as heat. An industry initiative called 80 PLUS certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.
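To see what those efficiency figures mean in practice, here is a quick calculation of wall draw and waste heat for a hypothetical 300 W load; 72% sits in the 70–75% range quoted above and 80% is the 80 PLUS threshold.

```python
# Wall draw and waste heat for a given DC load at two PSU efficiencies.
# The 300 W load is a made-up example; 72% and 80% reflect the figures above.
def wall_draw(load_watts, efficiency):
    at_wall = load_watts / efficiency          # power pulled from the outlet
    return at_wall, at_wall - load_watts       # (wall draw, heat dissipated)

for eff in (0.72, 0.80):
    wall, heat = wall_draw(300, eff)
    print(f"{eff:.0%} efficient PSU: {wall:.0f} W at the wall, {heat:.0f} W lost as heat")
# 72% -> ~417 W at the wall, ~117 W of heat; 80% -> 375 W and 75 W
```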
Storage
Smaller form factor (e.g. 2.5 inch) hard disk drives often consume less power per gigabyte than physically larger drives.


• Green use — reducing the energy consumption of computers and other information systems as well as using them in an environmentally sound manner
• Green disposal — refurbishing and reusing old computers and properly recycling unwanted computers and other electronic equipment
• Green design — designing energy-efficient and environmentally sound components, computers, servers, cooling equipment, and data centers
• Green manufacturing — manufacturing electronic components, computers, and other associated subsystems with minimal impact on the environment