One technology that IT managers should start to dig into, if you haven't already, is power over Ethernet. As devices stray further and further from power outlet locations, this emerging technology is gaining traction throughout the enterprise.
IP phones, wireless access points, security cameras, card scanners and other devices draw power from switches through standard Ethernet cabling, thus the name "power over Ethernet."
Today, under the IEEE 802.3af standard, the most power a device can pull through that cable is just over 15W. But the IEEE is working on a follow-on standard that would boost that to around 50W, opening the technology up to even more applications.
As attractive as this all seems, power over Ethernet poses challenges for wiring-closet and data centre operations -- a part of the enterprise that doesn't need any more power problems. There are many issues to consider, including the heating and cooling of the switches, backup power supplies, and the actual load each switch can handle.
In fact, AFCOM, an association for data centre professionals, recently released its top five predictions for the future of the data centre industry, one of which concerned power. "Over the next five years, power failures and limits on power availability will halt data centre operations at more than 90 percent of all companies," AFCOM predicts.
I recently spoke with Rick Sawyer, director of data centre technology for APC and a board member of AFCOM's Data Centre Institute, about the state of power over Ethernet and how it ties into overall power concerns.
How do you perceive the maturity level for power over Ethernet today?
It is really in the infancy stage and limited to low-power applications.
What are some considerations when rolling out this technology?
Since the capacity to deliver power over Ethernet is very limited (in the milliamp ranges), the primary considerations are knowing what the application's electrical load is, how all of the loads aggregate on the system, and whether the system is capable of delivering that degree of power.
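The load-aggregation check Sawyer describes can be sketched in a few lines. This is a hypothetical illustration: the device wattages, the switch's total budget, and the device names are assumptions, not figures from the interview; only the 15.4W per-port figure comes from the IEEE 802.3af limit at the power sourcing equipment.

```python
# Hypothetical sketch of a PoE budget check: each port's load must stay
# under the per-port limit, and the aggregate must fit the switch's
# total power budget. All device loads below are illustrative.

PORT_LOADS_W = {
    "ip_phone_1": 6.5,
    "ip_phone_2": 6.5,
    "access_point": 12.0,
    "camera": 9.0,
}

PER_PORT_LIMIT_W = 15.4   # IEEE 802.3af limit at the power sourcing equipment
SWITCH_BUDGET_W = 30.0    # assumed total PoE budget for this example switch

def check_budget(loads, per_port_limit, total_budget):
    """Return (fits, total_watts) after validating each port and the aggregate."""
    for name, watts in loads.items():
        if watts > per_port_limit:
            raise ValueError(f"{name} exceeds the per-port limit")
    total = sum(loads.values())
    return total <= total_budget, total

fits, total = check_budget(PORT_LOADS_W, PER_PORT_LIMIT_W, SWITCH_BUDGET_W)
# Here the four devices aggregate to 34W against a 30W budget, so the
# check fails -- exactly the oversubscription problem Sawyer warns about.
```

The point of the sketch is that every port can individually be within spec while the switch as a whole is oversubscribed, which is why the aggregate matters as much as the per-device load.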
Are there requirements that need to be met depending on the application?
Obviously the condition of the power is an issue -- filtering unwanted power characteristics, whether from the source, the conducting network or from the loads themselves. Filtering is potentially a huge issue: how do you protect the network from power problems induced by all of the attached loads? For instance, what if there is a lightning strike at some point on the network that transmits an impulse through the power system to the server sources? You could have the network fail from an externally connected device, potentially ruining millions of dollars of attached hardware.
How do you prevent this from happening?
Good engineering design requires knowledgeable designers, people who not only know the architecture, but also the risks and risk mitigation strategies and the hardware necessary to implement protective systems.
What environments are best suited to use power over Ethernet?
Small, low-power, local networks. Transmitting power over distances requires larger conductors or higher voltages. Even sending 5V DC over a couple of hundred feet of wire becomes problematic -- once a conductor runs more than about 50 feet, line losses from the wire's internal resistance cause a voltage drop at the end of the line.
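The line loss Sawyer mentions is just Ohm's law applied over the round-trip length of the cable. The numbers below are assumptions for the sketch: a 24 AWG copper conductor (roughly the gauge in Ethernet cabling) at a nominal 25.67 ohms per 1000 feet, carrying an assumed 0.35A.

```python
# Illustrative line-loss calculation: how much of a low DC voltage is
# lost in the wire itself over a long run. Gauge, current and the
# resistance figure are assumptions, not values from the interview.

OHMS_PER_1000FT_24AWG = 25.67  # nominal resistance of 24 AWG copper, one conductor

def voltage_drop(length_ft, current_a, ohms_per_1000ft=OHMS_PER_1000FT_24AWG):
    """V = I * R over the loop: current flows out and back, so the
    effective resistance is twice the one-way run."""
    loop_resistance = 2 * length_ft * ohms_per_1000ft / 1000.0
    return current_a * loop_resistance

# 5V DC at 0.35A over a 200-foot run of 24 AWG:
drop = voltage_drop(200, 0.35)   # roughly 3.6V lost in the copper
remaining = 5.0 - drop           # well under 1.5V left at the device
```

Under these assumptions most of the 5V supply is dissipated in the cable before it reaches the device, which is why practical PoE uses a much higher voltage (nominally 48V) so the same wattage needs far less current and suffers proportionally less drop.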
The alternative is to transmit using AC power, and to get sufficient power the frequency has to increase, which leads to even more technical issues. For instance, at higher frequencies the current (which actually does the electrical "work") rides on the outside or "skin" of the conductor, which in turn increases inductance and crosstalk with the signals carrying the data being transmitted. Very problematic.
Are there any other drawbacks to power over Ethernet?
Limited power capacity, potential for network disruption, potential for interference with data signal transmission, power sourcing equipment at the network hubs, power conditioning, liability for power-related issues in connected equipment.
Do you have advice for rolling out power over Ethernet to voice over IP, servers and wireless? What about planning for capacity and future capacity?
The most fundamental planning issue is to realise the power requirements at the source, and design for that: power conditioning, filtering, testing and being able to isolate problems easily.
What are some tips/recommendations in this area?
View the business case for implementation before jumping into a new technology, whatever it is: know the applications, know the hardware, know the reasons for selecting a particular approach from all of the options you have. You should also know the risks, have a risk management plan, and make your analysis based on all factors, including facility, power capacity, third-party service provider capability, cost and ROI.
Too many times people focus on one aspect or benefit of a solution -- VoIP is an example. It delivered tremendous savings in the cost of telephony, but a lot of people overlooked the fact that if the power in the telephone closet went down, the phones went down too (then how do you call people?). Telecom closets then required redesign, real-time management, services and upgrades to get back to the reliability users experienced before VoIP. A lot of these factors were not in the business case to implement VoIP, but the factors were real. The reliability problem was transferred from the telephone company to the end user, at what cost?
What do you predict for this market?
The real issue I see with power over Ethernet is: What about the looming support for applications over wireless systems, where there is no wire to transmit power through? Sometimes the promise of a new technology is eclipsed by even newer technologies, so the "what if" question gets more important in making a sound business decision. In this case: "If Ethernet applications move to fibre-optic cables or wireless channels, what is my investment in power over Ethernet buying me?"