What You Need To Know About Facebook’s Open Source Data Center Hardware



With so many contributions to the world of technology, companies of all sizes have had to experiment to find the most efficient ways of achieving their goals. Open Source Software (OSS) lets people look at what others have done and see whether it fits their needs without being subjected to huge licensing fees. While the approach is time consuming, it has also led to some major victories and innovations. Facebook has been instrumental in extending this philosophy to hardware since they started the Open Compute Project (OCP) in 2011. We'll tell you what you need to know about how they've helped themselves and others meet their storage and computing needs with open hardware.

OCP has given people a platform and an opportunity to start building new and different infrastructure for better quality and faster processing. Google, Microsoft, and Apple have all jumped on board to contribute to the cause. As companies grow larger, it takes an immense amount of energy to store all their information safely and with as little waste as possible, and there is still plenty of room for improvement as the need grows. OCP is meant to put the focus on the needs of the people who actually operate data centers, as opposed to vendors, who often lack the hands-on experience to understand the technical requirements involved.

Unsurprisingly, Facebook has been the major contributor to the project


They have shared new ways to design servers and network hardware, as well as details about electrical infrastructure. While the project has put more emphasis on hardware and the physical side of technology, they've also contributed software ideas.

In 2011, Facebook released documents describing the electrical and mechanical specifications of their first data center in Prineville, Oregon, along with information about how they grouped their servers and how they used battery cabinets as backups in case the power supply failed. They also released details of the Freedom server, meant to ease installation requirements, and the Spitfire server, a variation on a standard AMD motherboard. Their goal was often to strip out excess features that wasted energy, as with the Windmill and Watermark servers, which were designed to conserve power and cost in every possible way.

In 2012, they introduced mezzanine cards to expand motherboard functionality, along with new motherboards meant to last through several generations of processors. Facebook made their rack design public in 2013 to simplify servicing, and they also released high-volume storage solutions, improving on existing technology and then publishing how they made the improvements. In 2014, they came out with cold storage for data that wasn't in high demand on Facebook, along with memory solutions for their servers, and they open sourced the Honey Badger compute module for their storage hardware. Just last year, they released networking hardware designs, including core switches, as well as improved server platforms.

As you can see, Facebook has made it their mission to be forthcoming about their methods so other companies may benefit. They continue to be dedicated to this project, and encourage other tech companies to jump on board.


Original: SociablWeb
