• 12 May 2006
  • News

Tech Trends 2006


Last year saw lots of technologies that will carve a niche for themselves in the near future. PC Quest analyzes the technologies one should watch out for next year.

What you see today is a result of what happened yesterday, and what you'll see tomorrow will be a result of what's happening today. To predict the future with any degree of accuracy, you have to be aware of the past and the present. We make our predictions for 2006 based on what happened in 2005. We look at broad areas and tell you about the key technologies in each. At the end of the day, what good is technology if you can't make use of it? So we predict which technologies will become hot in 2006 for you to choose from, and which ones to keep a watch on for the coming years. We've also gone beyond technologies to talk about interesting products derived from them, as well as the standards being worked on. Products are the proof of success for any technology, and standards tell us how much faith the industry has in it to justify further investment.

Every technology has a lifecycle. In the beginning, there's a lot of talk about it; gradually, if it gains acceptance and everything goes right, it starts being adopted. It flourishes until something new and better comes out, after which it starts losing popularity and eventually fades into oblivion. We apply this curve to each area we've covered: wireless, security, storage, data centers, and basic hardware.

The curve itself is broken into seven parts: Buzz, Long Term, Very Hot, Hot, Steady, Lukewarm, and Down. Buzz stands for technologies that are being talked about a lot, but with no concrete action happening on them; they may or may not take off. Long Term technologies are those that will happen in 2-3 years. Very Hot technologies are the talk of the town, with some early birds implementing them. Hot technologies have gained critical mass, so you should implement them. Steady technologies have already been implemented by lots of people, and you should have done so already. Lukewarm technologies are old news and losing visibility. Likewise, Down technologies are losing ground as well as visibility to newer trends and technologies.

We hope that the pages to follow will help you plan your investments better.

By Anil Chopra, Krishna Kumar, Shekhar Govindrajan, Sujay Sarma and Vinod Unny of PCQuest, India's leading IT magazine


Wireless

Expect lots more wireless products, better throughput, higher range, and tons of applications in 2006. That pretty much sums up the excitement in this domain.

The year 2005 was one of innovation in wireless technologies, of standards wars and ratifications, and of new developments. Until now, one could neatly categorize all wireless technologies: 802.11a/b/g for wireless LANs, Bluetooth for wireless PANs, GSM/GPRS/CDMA for wireless WANs, RFID for supply chain management, and so on. All these technologies have achieved high degrees of penetration in their respective markets. But in the recent past, several other wireless technologies have been under development that promise to blur these neat categories and compete head-on with existing technologies.

Blast from the past: Many key trends have boosted wireless applications. One was the steep drop in prices of wireless access points, and the onslaught of a wide range of wireless routers from different vendors for the SOHO market. As a result, anybody could quickly set up a wireless network. Today, you can buy a wireless router for as little as Rs 3,000, and most notebooks come with built-in WiFi anyway, making it easy to set up a wireless network. We also saw many innovative wireless devices hit the market, like ASUS's wireless NAS box and Netgear's wireless travel router. The NAS box had a hard drive and embedded software that allowed it to act as a file server: you could create users and give them access to storage space on the hard drive, which they could then reach over WiFi. The travel router was a tiny wireless router you could carry along with your notebook; you could plug it into a DSL line and have wireless Internet access wherever you went. 802.11g became the de facto WiFi standard in India, while 802.11b phased out. In wireless WANs, the number of GSM subscribers in India grew to a whopping 53 million, while CDMA grew to 14.35 million. Bluetooth grew in popularity, with a slew of applications and products; for instance, one of the world's largest wireless healthcare networks was implemented using Bluetooth in Copenhagen in October.

Standards wars: The 802.11i security standard, for instance, went into full effect. Security has long been an issue in WiFi networks, and many techniques have been worked out to address it, be it WEP or MAC-based filtering. The one that has finally become popular is WPA, or WiFi Protected Access, which implements a subset of the 802.11i specs proposed by IEEE. It was ratified in June 2004, and products based on it streamed in at full flow. This year its successor, WPA 2, which implements the full 802.11i standard, also came into being, and Win XP started supporting it in May. As security has always been a concern on WiFi networks, this standard has helped alleviate some of the issues and will go a long way in helping more organizations implement WiFi networks over the coming year.

Coming to standards wars, one has been over higher-speed wireless access, which would take WiFi throughput beyond the current 54 Mbps to a theoretical maximum of 540 Mbps. This will be achieved using MIMO (Multiple In, Multiple Out) and based on the IEEE 802.11n standard. Two groups, TGn Sync and WWiSE, had been fighting it out with competing proposals since 2004; this year, they decided to merge their proposals and send the result for approval, with a decision expected by mid-2006. So, with mutual agreement between both factions, we should see WLAN speeds soar in the second half of 2006. This doesn't, however, mean you should put your plans of implementing WiFi on hold. Study your WiFi needs carefully, and if current technologies meet them, go ahead and use them. In fact, if you must have high-speed wireless today, some pre-standard 802.11n products are already available (see our September 2005 WiFi access point shootout).

Rising competition and choices:
As we said, new wireless technologies are being developed. One is WiMAX, currently positioned as a high-speed wireless broadband technology that works within a radius of 3-10 km. It's expected to replace DSL and cable as the last-mile solution. Even mobile users in a city can use it within a radius of 3 km and get up to 15 Mbps of bandwidth. It's expected to be incorporated into notebooks and PDAs next year. Besides DSL and cable users, it could also pull in users of GPRS and CDMA who don't travel much. Since the technology is more efficient than 802.11, who knows, it could even end up replacing that. But don't worry: that's not going to happen next year, at least.

On the wireless PAN front, Bluetooth is getting ready for its next version. Though the current Enhanced Data Rate version has seen tremendous success in terms of device support, its bandwidth is still limited to 2.1 Mbps. That's why the Bluetooth SIG (Special Interest Group) is now in talks with UltraWideBand manufacturers to use their technology in Bluetooth, which would give a significant boost to throughput. At the same time, another UWB-based technology called Wireless USB is also underway, which is supposed to replace the current 'wired' USB standard while offering a similar 480 Mbps throughput.



Security

2006 will see more action in information security and identity management solutions.

Today, a security threat can enter from anywhere, be it through e-mail, a Web browser, or even an infected notebook plugging into your network. Besides social engineering, we also saw lots of phishing and pharming scams, two techniques aimed at fishing out a user's personal information. So security has definitely been at the top of everyone's mind. And as more enterprise business moves online, it needs better security measures. This has driven a rise in SSL-based VPN solutions, and even a rise in integrated security appliances.

Security appliances: A lot of vendors are entering the market with integrated security appliances that combine firewalls, anti-spam, antivirus, and even end-to-end encryption. Also included in these appliances is the ability to demarcate DMZs and support VPNs over IPSec or PPTP with either 3DES or AES (256-bit) encryption. The IDS features on these boxes detect various kinds of known attacks, including flooding, IP spoofing and DoS. Such a box can also react in an emergency by dropping packets from the attacker's address. Some appliances even have network antivirus capability. These need to be geared to meet enterprise-class performance requirements for availability and speed; the iForce IDS appliance from Symantec, for instance, is supposed to monitor networks at speeds of up to 2 Gbps on some models.
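To make the "react by dropping packets" idea concrete, here is a minimal sketch of a rate-based flood detector of the kind such appliances run. All numbers and names here (the 10-second window, the 100-packet threshold, the class itself) are illustrative assumptions, not taken from any real product.

```python
from collections import defaultdict, deque

# Hypothetical thresholds for illustration only.
WINDOW_SECONDS = 10
MAX_PACKETS_PER_WINDOW = 100

class FloodDetector:
    def __init__(self):
        self.history = defaultdict(deque)  # src_ip -> timestamps of recent packets
        self.blocked = set()               # addresses whose packets get dropped

    def packet(self, timestamp, src_ip):
        """Record one packet; return False if it should be dropped."""
        if src_ip in self.blocked:
            return False
        q = self.history[src_ip]
        q.append(timestamp)
        # Slide the window: discard packets older than WINDOW_SECONDS.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) > MAX_PACKETS_PER_WINDOW:
            self.blocked.add(src_ip)       # react: drop the attacker's address
            return False
        return True

detector = FloodDetector()
# A quiet host passes; a host firing 200 packets in a fraction of a second gets blocked.
assert detector.packet(0.0, "10.0.0.5")
for i in range(200):
    detector.packet(1.0 + i / 1000, "10.0.0.99")
print("10.0.0.99" in detector.blocked)  # True
```

Real appliances do this in hardware or kernel space across many attack signatures at once; the sliding-window-per-source idea is the same.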

Vulnerability stats: The number of vulnerabilities reported in 2005 stands at 4,268, up about 500 from 2004. The most frequently attacked services were reported to be FTP, SSH, DNS, HTTP/HTTPS, SunRPC, NetBIOS and SQL Server. Thankfully, most of these attacks could be mitigated by upgrading to newer versions of the software or changing port numbers. CERT sees the growing number of Trojans and self-propagating worms as an area of concern.

Social engineering and ID theft: Two main technologies lead ID management today: one-time-key devices such as SecurID, whose codes you use at designated terminals or screens, and digital certificates.

Everything's cached: Nowadays, anything that's exposed to the Web has most likely been stored away forever in some corner of the Internet. Internet archival systems like the Wayback Machine and content replication systems that provide mirroring services are but the tip of the iceberg.

Disk space-full: Scientists postulate that about 23 percent of the Universe is composed of dark matter: stuff we cannot see, but whose presence has direct consequences for our Universe. Much the same is true of the files and programs on our hard disks. For so many things to happen when we just click on a Web page, our computer downloads and runs many files and programs, large and small, and all of it lands on the hard disk. Those that run may never, in fact, leave our computer completely, no matter what tools we use. This, in fact, is the single biggest challenge for system administrators worldwide.

Cracking for the public: Cracking passwords, it seems, has become commonly accessible and fashionable. A site powered by Zhu Shuanglei's 'Rainbow Crack' engine (an open-source download) has sprung up, promising to place online about 500 GB of rainbow tables (pre-computed password hashes) readily usable by anyone who pays for an account. RainbowCrack-Online.com claims to be for cracking what Google is for search. A lofty claim, sure, but imagine how much more you need to protect your systems once such a database is at the beck and call of every cracker around the world.
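The threat above rests on one fact: hashing the same password always yields the same hash, so hashes can be computed once and looked up forever. The toy sketch below uses a plain dictionary to stand in for the chain structure of real rainbow tables (which trade memory for computation), and its four-word password list is obviously hypothetical; it also shows why a per-user random salt defeats any table computed in advance.

```python
import hashlib
import os

# Hypothetical candidate passwords; real tables cover billions.
candidates = ["password", "letmein", "qwerty", "123456"]

# The attacker precomputes hash -> password once, then reuses it forever.
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# An unsalted stolen hash falls to a single dictionary lookup.
stolen_hash = hashlib.md5(b"letmein").hexdigest()
print(lookup.get(stolen_hash))  # letmein

# A per-user random salt makes every stored hash unique, so a table
# computed in advance no longer matches anything.
salt = os.urandom(8)
salted_hash = hashlib.md5(salt + b"letmein").hexdigest()
print(lookup.get(salted_hash))  # None
```

This is why salting (and, later, deliberately slow hash functions) became standard advice: the 500 GB of tables that site sells are useless against a hash whose salt the attacker has never seen.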

Basic Hardware

Dual-core CPUs, virtualization, SLI and CrossFire are some of the technologies that will make an impact in 2006.

Basic hardware is perhaps the toughest market for any vendor to be in, largely because the competition is stiff and margins are low. The moment someone introduces something new in the market, it's soon followed by hundreds of "me toos". So a vendor can only earn a premium on the product till others catch up. What does that have to do with technology? It's nothing else but market forces, right? Wrong. In order for a vendor to be able to sell its product in the market, it has to innovate.

The trend now is to build highly feature-rich motherboards with lots of functions onboard, whether it's 7-channel surround sound, Gigabit Ethernet, WiFi, RAID or FireWire ports. While this trend is good for consumers, it's also forcing component manufacturers to innovate further so their products don't become redundant. So even though you can get 7-channel audio onboard, you'll find sound cards with even more compelling features, such as the Creative X-Fi. Speaking of compelling, lots of compelling technologies were introduced this year with equally compelling applications. Let's get into them and see their impact.

Key innovations and their impact: We'll start with graphics. The focus has been on multi-GPU solutions, i.e., putting multiple graphics cards in the same machine and making them work together. This started with the introduction of SLI, or Scalable Link Interface, technology by nVidia. With SLI, you could take two nVidia graphics cards and place them both in an SLI-ready motherboard; the two cards would work together, nearly doubling performance. Soon afterwards, ATI introduced its CrossFire multi-GPU technology. The big deal about this one was that people who already had certain ATI Radeon cards could buy a CrossFire Edition card and use the two together, saving them the cost of another full card. Since both SLI and CrossFire are relatively new, only gaming freaks with very deep pockets can afford them. But going by the trend in the hardware segment, that shouldn't be the case for too long. Next year they'll become common, and something better will emerge. What will it be? Multi-core GPUs? You never know!

Speaking of multi-core, Intel and AMD both introduced dual-core processors this year, called the Pentium D and X2 respectively. Multi-core technology as such is nothing new: it's been inside RISC-based processors, such as those from IBM and Sun, which power high-end servers. But with Intel and AMD also jumping onto the multi-core bandwagon, the technology has now reached desktops, which is worth noticing. By next year, dual-core CPUs should completely take over from single-core ones. The advantage, of course, is much better performance, especially when multi-tasking. Moreover, both Intel and AMD have also introduced multi-core technology into their server CPUs, namely the Xeon and Opteron, and the first servers based on it have already started shipping (see the Acer Altos server review in this month's shootout). As with desktops, most server manufacturers are likely to replace their dual-CPU offerings with dual-core ones by next year.

We're not through with CPUs yet. The mobile processors for notebooks are also expected to switch to dual core next year, resulting in much more powerful notebooks.

Besides dual-core, many other technologies entered the CPU this year that will make processors more powerful yet less power-hungry. The significant one to remember for next year is virtualization. Processors with virtualization support will allow multiple OSs to run on the same machine in multiple partitions. Until now, this was possible only through specialized software such as VMware and MS Virtual PC/Server. Virtualization as such has become quite popular in organizations these days, and hardware support will give it a further boost. A network administrator could then put multiple OSs on a single machine, with one for the user, another to do security checks, and a third to do inventory, without taking a toll on performance.

As you can imagine, the clock speed battles in CPUs are long over. Now it's all about adding other features into the CPU, not only for higher performance, but better manageability, security and power savings.

Overall, we're seeing efforts from everyone in the hardware industry to improve the end-user experience through better products, which is a really good sign. The PC range, for instance, widened considerably this year. We saw several vendors introduce PCs in the 10-12K range. Another hot range was Media Center-based machines, which can act as a complete entertainment center at home. Servers are following a similar pattern, becoming more of a commodity than a special device. Today, you can pick up an entry-level server for less than 60K.

We finally realized that while technology is definitely important, its application is even more important, and we'll see that focus increase in the coming years.

Storage Matters


Storage has been the center of attraction for many years now and one of the reasons is that it's evolved with the times

Storage has been the center of attraction for many years now, and one of the reasons is that it has evolved with the times. In fact, storage is perhaps one of the few segments that has grabbed every possible opportunity and made use of it.

Storage and compliance: Compliance became a major concern for US companies after the HIPAA and Sarbanes-Oxley acts were introduced, and even Indian companies catering to international clients had to worry about complying with the two acts. The storage industry identified compliance as an opportunity and created a need for good storage solutions for effective data management: vendors were no longer selling dumb storage boxes, but intelligent information management solutions. With HIPAA, storage vendors introduced Information Lifecycle Management, or ILM, which stores and manages data right from its creation to its deletion. Sarbanes-Oxley reinforced ILM, and e-mail archiving also picked up. So ILM is still hot, and organizations should consider it from the point of view of managing their information properly.

E-mail archiving: This is a key trend today, and not without good reason. Most official communication happens over e-mail, which means it can be used as legal evidence should the need arise. It's therefore very important to back up all important e-mail. That may not sound like much at first glance, so let's do some calculations. If a single employee gets 1 GB of e-mail a month and a company has 100 employees, that's 100 GB of e-mail a month, or 1,200 GB per year. You would need to store it for at least five years, which translates to 6,000 GB of e-mail! Where do you store all of it? You could back it up to tape, which would be more cost-effective since tape is still cheaper than disk. To complicate matters further, a lot of companies allow employees to use e-mail for personal communication as well. If your company allows that, you will have to take some tough decisions on whether to continue allowing it or change the policy. This space will see more action next year, so stay tuned and work out your plans.
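The back-of-the-envelope sizing above can be written down as a one-line formula, handy for plugging in your own headcount and retention policy. The defaults (1 GB per employee per month, five-year retention) are the article's illustrative assumptions, not measurements.

```python
def archive_size_gb(employees, gb_per_month=1, retention_years=5):
    """Total e-mail archive in GB over the retention period."""
    return employees * gb_per_month * 12 * retention_years

# 100 employees, 1 GB/month each, kept for 5 years:
print(archive_size_gb(100))  # 6000 GB, i.e. roughly 6 TB to back up
```

Scale the inputs to your organization before deciding between tape and disk; the linear growth is the point, since doubling headcount or retention doubles the archive.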

Storage virtualization: There are three types of virtualization: server, storage and network. Of these, server virtualization has really picked up this year, with some pretty good solutions available from IBM, VMware, Microsoft, McAfee and several open-source communities. Storage virtualization has been talked about for a long time but is still an emerging technology; very few large enterprises across the world have implemented it. However, it is picking up, and one should see more action on this next year.

Other trends: Another area that's picking up is storage security. Drilling deeper, a lot of what you see today is happening because the cost of individual storage elements is coming down while capacities are increasing. Seagate, for instance, introduced its perpendicular recording technology this year, which will allow it to increase drive capacities quite a bit. Using this technology, it launched its 2.5-inch Momentus drive for notebooks with a 160 GB capacity, which really takes care of all the storage needs of a mobile executive. Regular hard drive capacity has already touched 500 GB, which makes it easier to build multi-terabyte NAS boxes.

Much more has happened in storage in 2005, but space restricts us from covering it all.


Enterprise Mobility

The increase in usage of mobile devices in organizations raises more policy-related issues than technology-related ones.

Mobility is de rigueur (required by current fashion or custom; socially obligatory) for any self-respecting tech setup these days. This piece will focus on the associated issues that both users and the tech team are likely to face.

Let us start with the first of our two predictions. The use of affordable mobile devices that can connect to the network, as well as to the Internet, over multiple wireless protocols raises a number of issues: of security, of access and of connectivity.

The IT department is more likely to grapple with these issues of policy than with those of technology implementation. For example, should users be allowed to use their personal laptops at the workplace instead of office machines? If yes, does the office get the same amount of control over the software running on those machines as it has over the others?

If those machines (with valuable enterprise data) crash, is the organization responsible for recovering it? Who will pay for it? And what about an outsider who walks in with a Bluetooth-enabled device (most cell phones and almost all PDAs today are Bluetooth enabled) that scans for and connects to open Bluetooth connections on notebooks across the office?

An average IT department that does some monitoring and enforcement of controls on the infrastructure being used (but not too strictly) will be the one most heavily taxed by the addition of a plethora of mobile devices to the network.

The answer, as with any other tech infrastructure problem, lies in getting your user policies in place first, and then working out the tech details. Many companies have already started locking down the ports on their employees' laptops and installing other security software.

Another challenge IT departments will face is reformatting their application GUIs and Web pages (intranet applications included) to fit mobile devices, particularly cellphones and PDAs. With an ever-increasing number of platforms (operating system, supported browser, hardware) and screen sizes around, IT departments will be forced to limit the platforms they support to a few, and thus may be forced to drive some level of standardization of mobile platforms across the organization as they strive to deliver richer interfaces to mobile devices.

The IT department will also have to make provisions for round-the-clock technology support (if it doesn't already have it in place). Mobility is here not only to stay but to flourish over the years. This is just the beginning, so it's better to start planning your strategy to manage it now.


Data Centers

Where do you host your data and how is it stored? A mix of environmental engineering, location choice, security practices and oodles of storage and processing power with lots of bandwidth. What's new?

Data centers need the following capabilities: 24x7 availability, high bandwidth, power and heat conditioning, good physical access security and a sound location. They are meant to be locations that can store all of your information for as long a period as possible. Not all data centers are purely for storage, though; they can host your processing power too, performing off-site computations like long-range forecasts and freeing up local resources for everyday computational tasks.

Consolidation: From equipment and software vendors to solution providers and consultants, everyone is busy implementing or advising consolidation of resources. But how exactly do you consolidate data centers? The answer lies in determining the goals of your data centers, consolidating their needs and making better use of the available resources.

Also, part of those worries can be outsourced or co-located with someone else. For instance, the leanest deployment would be a data center plus a DR site, where the DR site could be a couple of co-located servers and some storage with a specialist. This way, consolidation no longer means only how much equipment you have, but also includes short- and long-term costs. It has become, in effect, a consolidation of resources that includes your budget. The simplest example of consolidation is, of course, moving from towers to racks and from racks to blades.

Talking of blade servers, the action now is on Itanium, Itanium 2, Opteron and dual-core blades, following the release of the 64-bit version of Windows Server 2003. Vendors like SGI, IBM, HP and Fujitsu have announced or released such blades.

Better design: The past year has seen vendors going around shouting 'thermals'. When people switched from racks to blades, the need for cooling went up, because instead of just two servers in a 2U box, you now had 25 of them, or 50 if your blades were dual-processor. And these boxes need to be cooled side-on instead of vertically. Thus, the racks and cabinets housing the blades have had to be redesigned to accommodate the new cooling requirements.

Virtualization: Hardware and software virtualization plays a big part in resource utilization, and when you're talking about the costly iron in a data center, this is important. When a physical server is hardware-partitioned, its resource utilization is better. While the past year has seen Intel talking about including hardware partitioning support in its new CPUs, servers have had it for ages on different platforms, including the Power series from IBM. Add to this the reality of multi-core CPUs and 64-bit processing, and you can imagine the raw power available on such servers.

Virtualization is also a key to consolidation, since it allows you to utilize what you have in so many different ways. For instance, if you had a single blade server (with 25 blades in it), you could add a virtualization layer to that server and make yourself a 25-CPU server.

What is key to the success of virtualization is actually how the workload is managed. Unless workloads or resources can be dynamically shifted to suit conditions, the system will continue to face lopsided resource marshalling. For instance, the workload manager should shift resources from a lightly utilized partition to one that's almost full. HP's Adaptive Enterprise is one such approach; similarly, IBM has its Virtualization Engine, which adds virtualization support to the i, p and x series servers, and Sun has the N1 Grid.

However, all the action in virtualization today is happening around VMware. Its updated line-up of software packs more punch into the virtual machines it helps create and host; for details, see the box on virtualization. Competing software like Xen and Microsoft Virtual Server are trying to fight back, but so far their features leave them quite far behind. The virtualization race, though, is getting serious: Linux vendor Red Hat has already announced that its next release of Red Hat Linux (2006) will bundle native support for server virtualization! The specifics of what this would include are not available yet, although it is expected to give competition to full-fledged virtualization products from other vendors.

How will virtualization and consolidation affect your data centers? One emerging picture is that enterprises may no longer need costly full-fledged data center buildings, but can make do with a single rack full of blades (that's about 250 blades in all) in a server room somewhere in a corner.

Blade PCs: In the future, the enterprise desktop may be a dumb terminal, with the software running on a blade somewhere in a data center. This might come true if IBM's Virtualized Hosted Client Infrastructure succeeds. The average ratio is about 15 virtual PCs to a blade. The ecosystem of software that does this consists of a VMware GSX Server, which hosts the VMs, and a Citrix Presentation Server, which provides the interface to access them. Pricing is not currently available, although the first products are expected to ship in early 2006.
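Putting the two figures above together, that is, roughly 15 virtual PCs per blade and a rack of about 250 blades, gives a feel for the scale of hosted desktops. Both ratios are the article's rough estimates, not vendor specifications, so treat this as back-of-the-envelope math only.

```python
def desktops_served(blades, vms_per_blade=15):
    """Virtual PCs hosted, at an assumed average VMs-per-blade ratio."""
    return blades * vms_per_blade

# One full rack of blades, at the article's estimated ratios:
print(desktops_served(250))  # 3750 virtual desktops from a single rack
```

In other words, a single well-provisioned rack could, in principle, replace the desktops of a few thousand employees, which is exactly why the hosted-client model is attracting attention.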


