<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Press Releases - Aeon Computing</title>
	<atom:link href="https://www.aeoncomputing.com/category/press-releases/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.aeoncomputing.com</link>
	<description>High-Performance Computing</description>
	<lastBuildDate>Tue, 22 Mar 2022 16:42:49 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.9.10</generator>
	<item>
		<title>The Accelerated Box of Flash: Accelerating Intensive Data Operations with Computational Storage</title>
		<link>https://www.aeoncomputing.com/the-accelerated-box-of-flash-accelerating-intensive-data-operations-with-computational-storage/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-accelerated-box-of-flash-accelerating-intensive-data-operations-with-computational-storage</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Sat, 19 Mar 2022 21:03:03 +0000</pubDate>
				<category><![CDATA[HPC]]></category>
		<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[ABOF]]></category>
		<category><![CDATA[Flash]]></category>
		<category><![CDATA[OpenZFS]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">https://www.aeoncomputing.com/?p=3503</guid>

					<description><![CDATA[<p>The Accelerated Box of Flash: Accelerating Intensive Data Operations with Computational Storage</p>
<p>Radically new approach to storage acceleration aids data manipulation for research and discovery, co-developed by Los Alamos National Laboratory, NVIDIA, Eideticom, Aeon Computing and SK hynix</p>
<p>The post <a href="https://www.aeoncomputing.com/the-accelerated-box-of-flash-accelerating-intensive-data-operations-with-computational-storage/">The Accelerated Box of Flash: Accelerating Intensive Data Operations with Computational Storage</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<h4 style="text-align: left;">For Immediate Release</h4>
<h2 style="text-align: center;">The Accelerated Box of Flash: Accelerating Intensive Data Operations with Computational Storage</h2>
<h3 style="text-align: center;"><em>Radically new approach to storage acceleration aids data manipulation for research and discovery</em></h3>
<p><strong>San Diego, March 18, 2022</strong></p>
<p>News Facts</p>
<ul>
<li>Los Alamos National Laboratory, NVIDIA (Mellanox), Eideticom, Aeon Computing and SK hynix co-developed the &#8220;Accelerated Box of Flash&#8221; (ABOF) platform</li>
<li>The ABOF platform incorporates accelerator technology to offload performance-critical storage functions from host systems.</li>
</ul>
<p>Data is a vital part of solving complicated scientific questions, in endeavors ranging from genomics, to climatology, to the analysis of nuclear reactions. However, an abundance of data is often only as good as the ability to efficiently store, access and manipulate that data. To facilitate discovery with big data problems, researchers at Los Alamos National Laboratory, in collaboration with industry partners, have developed an open storage system acceleration architecture for scientific data analysis, which can deliver 10 to 30 times the performance of current systems. The architecture enables offloading of intensive functions to an accelerator-enabled, programmable and network-attached storage appliance called an Accelerated Box of Flash or simply ABOF. ABOF systems are destined to be a key component of the Laboratory’s future HPC platforms.</p>
<p>“Scientific data and the data-driven scientific discovery techniques used to analyze that data are both growing rapidly,” said Dominic Manno, researcher with Los Alamos National Laboratory’s High Performance Computing division. “Performing the complex analysis to enable scientific discovery requires huge advances in the performance and efficiency of scientific data storage systems. The ABOF programmable appliance enables high-performance storage solutions to more easily leverage the rapid performance improvements of networks and storage devices, ultimately making more scientific discovery possible. Placing computation near storage minimizes data movement and improves the efficiency of both simulation and data-analysis pipelines.”</p>
<p>Scalable computing systems are adopting Data Processing Units (DPUs) placed directly on the data path to accelerate intensive functions between CPUs and storage devices; however, the ability to leverage DPUs within production-quality storage systems for use in complex HPC simulation and data-analysis systems has proven difficult. While DPUs have specialized computing capabilities that are tailored to data processing tasks, their integration into HPC systems has not fully realized available efficiencies.</p>
<p>The ABOF appliance is the product of hardware and storage system software co-design. It enables simpler use of NVIDIA BlueField-2 DPUs and other accelerators for offloading intensive operations from host CPUs, without major storage system software modifications, and lets users benefit from these offloads and the resulting speedups with no application changes. The current implementation applies specialized accelerators to three functions critical to storage system operation – compression, erasure coding and checksums – each of which consumes time, money and energy in storage systems. The appliance uses BlueField-2 DPUs with 200 Gb/s InfiniBand networking. The performance-critical functions of the popular Linux Zettabyte File System (ZFS) are offloaded to the accelerators in the ABOF. This offload is accomplished through a new ZFS Interface for Accelerators, available on GitHub. The Linux DPU Services Module, also on GitHub, is a Linux kernel module that enables DPUs to be used directly from within the kernel, wherever they sit along the data path.</p>
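<p>For context on what is being offloaded: in a stock ZFS deployment, these three functions are ordinary host-side pool and dataset settings computed by the host CPU. The following is a minimal sketch of standard ZFS administration (the pool name and device paths are hypothetical; this is not the ABOF offload software itself):</p>

```shell
# Hypothetical pool name and device paths, for illustration only.
# raidz2 provides double-parity erasure coding -- one of the three
# functions the ABOF accelerates.
zpool create tank raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Compression and checksum algorithms are per-dataset ZFS properties;
# normally the host CPU computes these on every write.
zfs set compression=lz4 tank
zfs set checksum=fletcher4 tank
zfs get compression,checksum tank
```

<p>With the ABOF, these same property-driven code paths are routed through the ZFS Interface for Accelerators to the DPU rather than being computed on the host CPU.</p>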
<p>The project underwent a successful internal demonstration following the January release of the ABOF appliance hardware and its supporting software. Collaborators included NVIDIA, which built the data processing units and provided a scalable storage fabric; Eideticom, which created the NoLoad computational storage stack used to accelerate data-intensive operations and minimize data movement; Aeon Computing, which designed and integrated each component into a storage enclosure; and SK hynix, which provided the fast storage hardware. “HPC is solving the world’s most complex problems as we enter the era of exascale AI,” said Gilad Shainer, senior vice president of networking at NVIDIA. “NVIDIA’s accelerated computing platform dramatically boosts performance for innovative exploration by pioneers such as Los Alamos National Laboratory, allowing researchers to drastically speed up breakthroughs in scientific discoveries.”</p>
<p>“The Next Generation Open Storage Architecture enables a new level of performance and efficiency thanks to its hardware-software co-design, open standards and innovative use of technologies such as DPUs, NVMe and Computational Storage,” said Stephen Bates, Chief Technology Officer at Eideticom. “Eideticom is proud to work with Los Alamos National Laboratory and the other partners to develop the computational storage stack used to showcase how this architecture can achieve these new levels of performance and efficiency. The efficient use of accelerators, coupled with innovative software and open standards, is key to the next generation of data centers.”</p>
<p>“Developing a cutting-edge storage product with an end-user has been a very positive experience,” said Doug Johnson, co-founder of Aeon Computing. “Working together with the technology vendors and end-user in collaboration allowed for rapid iteration and enhancement of a new type of storage product that will serve the most important goal a product can have: acceleration of the end-user’s workflow.”</p>
<p>“SK hynix joined this collaboration building the ABOF because we understand the need for a new flash memory-based system that can accelerate data analysis,” said Jin Lim, vice president of the Solution Lab at SK hynix. “Building on this showcase technology, we are committed to working with the collaboration partners to further define the new architecture of the computational storage device and the requirements critical to its best use cases.”</p>
<p>Building on the file system acceleration project, researchers next plan to pursue integrating a set of common analysis functions into the system. That functionality would allow scientists to analyze data using their existing programs, potentially avoiding additional data movement and demands on supercomputing resources. It would be specialized and tailored to the scientific community – another robust tool for tackling the complicated, data-intensive questions that underlie the challenges in our world.</p>
<p>&nbsp;</p>
<p><strong>About Aeon Computing</strong></p>
<p>Aeon Computing is based in San Diego, California and has over 55 years of staff experience in high performance computing, enterprise computing architectures, and data storage, with a focus on architecting perfectly suited customer solutions. Their customers include academic, government, and commercial institutions that prefer high performance design over stock solutions.</p>
<p><strong>About Los Alamos</strong></p>
<p>Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWX Technologies, Inc. and URS for the Department of Energy&#8217;s National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.</p>
<p>Contact</p>
<p>Doug Johnson<br />
Co-founder, Aeon Computing<br />
858.412.3810<br />
doug.johnson@aeoncomputing.com<br />
www.aeoncomputing.com</p>
<p><strong>Follow Aeon at @AeonComputing</strong></p><p>The post <a href="https://www.aeoncomputing.com/the-accelerated-box-of-flash-accelerating-intensive-data-operations-with-computational-storage/">The Accelerated Box of Flash: Accelerating Intensive Data Operations with Computational Storage</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Los Alamos National Labs Selects Aeon Computing’s Lustre/OpenZFS for 28PB Lustre file system</title>
		<link>https://www.aeoncomputing.com/aeon-lustre-lanl-28pb/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-lustre-lanl-28pb</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Thu, 19 Nov 2015 14:52:03 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[Lustre]]></category>
		<category><![CDATA[OpenZFS]]></category>
		<category><![CDATA[ZFS]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=3152</guid>

					<description><![CDATA[<p>Los Alamos National Labs Selects Aeon Computing’s Next Generation of Supercomputing Infrastructure</p>
<p>Aeon Computing delivers two site-wide Lustre File Systems to meet existing and future demands for parallel access data storage performance for the laboratory’s technical computing program.</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-lustre-lanl-28pb/">Los Alamos National Labs Selects Aeon Computing’s Lustre/OpenZFS for 28PB Lustre file system</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<h4>For Immediate Release</h4>
<h3 style="text-align: center;">Los Alamos National Labs Selects Aeon Computing’s Next Generation of Supercomputing Infrastructure</h3>
<h4>Aeon Computing delivers two site-wide Lustre File Systems to meet existing and future demands for parallel access data storage performance for the laboratory’s technical computing program.</h4>
<p><strong>San Diego, November 16, 2015</strong></p>
<p>News Facts</p>
<ul>
<li>Los Alamos National Security, LLC (LANS) has selected high-performance storage from Aeon Computing to support its Advanced Simulation and Institutional Computing programs, which encompass a broad range of secure and collaborative scientific efforts that involve national security, physical and environmental sciences, cosmology, and other scientific research at Los Alamos National Laboratory (LANL).</li>
<li>LANL is one of the premier supercomputing and scientific research institutions in the world. Its mission is to solve national security challenges through scientific excellence. To support and enhance its constantly evolving environment for scientific simulations and technical computing architectures, LANL sought a high-performance, open, scalable, and reliable site-wide Lustre file system that represented the best overall value.</li>
<li>LANL selected Aeon Computing’s high-performance open Lustre Scalable Unit to meet the compute-intensive demands of several computing clusters by delivering two separate file systems. Each system features 14 Petabytes of storage capacity and up to 160 GB/second of I/O performance using Lustre on OpenZFS.</li>
<li>Aeon Computing’s deployment represents the largest known ZFS-based Lustre file system that does not rely on hardware-based or proprietary RAID storage technology.</li>
</ul>
<h4>Lustre on OpenZFS: Performance and Reliability Based on Open Standards</h4>
<p>Using Aeon Computing’s Lustre storage, LANL brings a large, reliable, and open-standards-based performance-tier data storage resource to its different HPC platforms, with shared access across its wide-ranging supercomputing environment.</p>
<p>Aeon Computing’s Lustre file system, based on its Lustre Scalable Unit, delivers 14 Petabytes at up to 160 Gigabytes per second over single-rail FDR14 InfiniBand. Each Lustre Scalable Unit comprises two Lustre OSS nodes and 120 6-Terabyte enterprise 12G SAS disk drives, employing OpenZFS with raidz2 data parity protection. Additional resiliency is provided by multipath and high-availability failover connectivity, eliminating single points of failure. The two 14 Petabyte file systems deployed by LANL use 5,020 6-Terabyte disk drives combined.</p>
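<p>The quoted capacity can be sanity-checked from the per-unit numbers. The arithmetic below assumes 20 Scalable Units per file system (40 OSS nodes at two per unit) and ignores any spare or metadata drives; the release does not state the exact layout:</p>

```shell
# Back-of-envelope raw capacity per file system (assumed layout).
drives_per_unit=120   # disk drives per Lustre Scalable Unit
tb_per_drive=6        # Terabytes per drive
units_per_fs=20       # assumed: 40 OSS nodes / 2 OSS nodes per unit
raw_tb=$((units_per_fs * drives_per_unit * tb_per_drive))
echo "${raw_tb} TB raw per file system"   # 14400 TB, i.e. roughly the quoted 14 PB before raidz2 parity
```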
<p>Aeon Computing’s Lustre File System can handle a wide range of compute-driven storage and data I/O workloads, from small jobs to jobs spanning many thousands of processor cores in parallel.</p>
<p>Aeon Computing, a leading HPC and Lustre file system storage vendor, has been awarded a contract by Los Alamos National Security, LLC (LANS) to provide two Lustre file systems to enhance LANL’s technical supercomputing capabilities in support of its national security mission. Each of the two Lustre file systems provides 14 Petabytes of data storage capacity and is capable of up to 160 Gigabytes per second of parallel access performance. These next-generation systems push the limits of Lustre storage performance.</p>
<p>The two 14 Petabyte Lustre file systems will serve the intense data I/O workloads of both the facility-wide open research computing and the security-focused computing missions. Each file system is connected to the high-speed computing fabric with 2.35 Terabits per second of fabric bandwidth using FDR14 InfiniBand. The two Lustre file systems employ OpenZFS and high-availability configurations for data integrity and redundancy. Each Lustre file system contains 40 Lustre OSS nodes, each capable of 4 Gigabytes per second of sustained data performance. The two Lustre file systems are powered by end-to-end enterprise-grade technology, including LSI/Avago 12G SAS (serial attached SCSI), Mellanox FDR14 InfiniBand, HGST 12G enterprise SAS disk drives, SanDisk 12G SAS SSDs, and Intel server technologies.</p>
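<p>The aggregate bandwidth figure follows directly from the per-node number quoted above (a simple sanity check, not a claim from the release):</p>

```shell
# 40 OSS nodes, each sustaining 4 Gigabytes per second.
oss_nodes=40
gb_per_oss=4
echo "$((oss_nodes * gb_per_oss)) GB/s aggregate per file system"   # 160 GB/s
```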
<p>The file systems are integrated into site-wide monitoring infrastructure without the need for cumbersome or closed vendor APIs. “We were targeting an open solution that would utilize our Tri-Lab Operating System TOSS with Lustre, and provide a great performance to cost ratio,” says Kyle Lamb, Infrastructure Team Lead in the High Performance Computing Division at Los Alamos National Laboratory. “Utilizing commodity hardware and OpenZFS for RAID provides a cost-effective high performance solution with the added benefit of compression to increase available usable capacity. This allows us to provide the high density performance required for our existing clusters as well as our future Commodity Technology Systems.”</p>
<p>According to Jeff Johnson, co-founder of Aeon Computing, “We were able to architect a Lustre file system to meet LANL’s needs that was affordable and employed open standards in hardware and software. We were able to deliver a solution that met and exceeded LANL’s rigorous demands of multi-system HPC data I/O and provided a system that was truly open.”</p>
<p>The Aeon Lustre Scalable Unit is a 12U system containing 120 enterprise SAS disk drives and two fully redundant, hot-swappable Lustre OSS nodes. The Lustre Scalable Unit features 12G SAS storage technology and supports single- and dual-rail QDR, FDR and EDR InfiniBand, as well as Intel’s Omni-Path fabric and 10/40/100 Gigabit Ethernet. The Aeon Computing Lustre Scalable Unit is available for sale and can be used in a wide range of Lustre file system designs.</p>
<p>Visit Aeon Computing in booth 1746 at SC15 to see their Lustre Scalable Unit and Eclipse-NV NVMe storage system.</p>
<p><strong>About Aeon Computing</strong></p>
<p>Aeon Computing is based in San Diego, California and has over 55 years of staff experience in high performance computing, enterprise computing architectures, and data storage, with a focus on architecting perfectly suited customer solutions. Their customers include academic, government, and commercial institutions that prefer high performance design over stock solutions.</p>
<p><strong>About Los Alamos</strong></p>
<p>Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is operated by Los Alamos National Security, LLC, a team composed of Bechtel National, the University of California, BWX Technologies, Inc. and URS for the Department of Energy&#8217;s National Nuclear Security Administration. Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.</p>
<p>Contact</p>
<p>Doug Johnson<br />
Co-founder, Aeon Computing<br />
858.412.3810<br />
doug.johnson@aeoncomputing.com<br />
www.aeoncomputing.com</p>
<p><strong>Follow Aeon at @AeonComputing</strong></p><p>The post <a href="https://www.aeoncomputing.com/aeon-lustre-lanl-28pb/">Los Alamos National Labs Selects Aeon Computing’s Lustre/OpenZFS for 28PB Lustre file system</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing to Deploy 7 Petabyte Lustre Filesystem for SDSC’s Comet Supercomputer</title>
		<link>https://www.aeoncomputing.com/aeon-computing-to-deploy-7-petabyte-lustre-filesystem-for-sdsc%c2%92s-comet-supercomputer/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-computing-to-deploy-7-petabyte-lustre-filesystem-for-sdsc%25c2%2592s-comet-supercomputer</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Thu, 21 Nov 2013 11:47:50 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[Data Oasis]]></category>
		<category><![CDATA[Filesystem]]></category>
		<category><![CDATA[Gigabytes]]></category>
		<category><![CDATA[Lustre]]></category>
		<category><![CDATA[NSF]]></category>
		<category><![CDATA[SDSCs]]></category>
		<category><![CDATA[Servers]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=1</guid>

					<description><![CDATA[<p>Aeon Computing to deploy 7 Petabyte Lustre file system for SDSC&#8217;s Comet supercomputer, exceeding 200 GB per second. Aeon continues to provide affordable Lustre performance to research computing. SC13 DENVER&#8211;(BUSINESS WIRE)-Aeon Computing, a leading HPC and storage vendor, has been selected by the San Diego Supercomputer Center (SDSC) at the University of California, San Diego,</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-computing-to-deploy-7-petabyte-lustre-filesystem-for-sdsc%c2%92s-comet-supercomputer/">Aeon Computing to Deploy 7 Petabyte Lustre Filesystem for SDSCs Comet Supercomputer</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Aeon Computing to deploy 7 Petabyte Lustre file system for SDSC&#8217;s Comet supercomputer, exceeding 200 GB per second. Aeon continues to provide affordable Lustre performance to research computing.<span id="more-1"></span></p>
<h1>SC13</h1>
<p><strong>DENVER&#8211;(BUSINESS WIRE)</strong>-Aeon Computing, a leading HPC and storage vendor, has been selected by the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, to build and deploy a 7 Petabyte (PB) Lustre parallel file system as part of the new petascale-level Comet supercomputer. The performance of the 7 PB file system will exceed 200 Gigabytes per second.</p>
<blockquote><p>“Aeon is a key partner in providing hardware and expertise for our Data Oasis parallel file system, which now serves our Gordon and Trestles systems and will support our Comet cluster when it comes online in 2015.”</p></blockquote>
<p><strong>SDSC</strong> was awarded a $12-million grant from the National Science Foundation (NSF) to deploy the new Comet system. Comet will be based on next-generation Intel Xeon processors. In addition to the optimized Aeon Computing 7 PB Lustre filesystem, each node will be equipped with two processors, 128 GB (gigabytes) of traditional DRAM, and 320 GB of flash memory. Comet is designed to optimize capacity for modest-scale jobs, with each rack of 72 nodes having a full bisection InfiniBand FDR interconnect, with a 4:1 bisection interconnect across the racks.</p>
<p>This next-generation Lustre file system pushes the limits of Lustre storage performance, and will be an addition to SDSC’s Data Oasis Lustre resource. “We welcome Aeon as a partner in the Comet program,” said SDSC Deputy Director Richard Moore. “Aeon is a key partner in providing hardware and expertise for our Data Oasis parallel file system, which now serves our Gordon and Trestles systems and will support our Comet cluster when it comes online in 2015.”</p>
<blockquote><p>“SDSC pushes the envelope in research computing, and their requirements for high performance, affordability, and maintainability fit perfectly with Aeon’s product design approach,” stated Jeff Johnson, co-founder of Aeon Computing. “Working closely with SDSC’s staff and Comet stakeholders we were able to deliver a very affordable solution that meets the rigorous demands of data-intensive computing and support SDSC’s Comet initiative of HPC for the 99%.”</p></blockquote>
<p>The new file system will be designed around the Intel® next-generation Xeon processor and the latest in SAS storage technology, and is heavily based on commercial off-the-shelf (COTS) components.</p>
<p style="text-align: left;">For details and an expanded technical discussion on Lustre filesystems and other high performance computing solutions, visit Aeon Computing in Booth 4513 at SC13 in Denver.</p>
<p><a href="http://www.businesswire.com/news/home/20131121006523/en/Aeon-Computing-Deploy-7-Petabyte-Lustre-File#.Uv3hi4WPObg" class="light_button" target="_blank">Read More</a></p>
<p>Source <a href="http://www.businesswire.com/news/home/20131121006523/en/Aeon-Computing-Deploy-7-Petabyte-Lustre-File#.Uv3hi4WPObg" target="_blank">Business Wire</a>.</p>
<p>&nbsp;</p><p>The post <a href="https://www.aeoncomputing.com/aeon-computing-to-deploy-7-petabyte-lustre-filesystem-for-sdsc%c2%92s-comet-supercomputer/">Aeon Computing to Deploy 7 Petabyte Lustre Filesystem for SDSCs Comet Supercomputer</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing EclipseSL Wins Best HPC Storage Product or Technology at SC2012 in Salt Lake City</title>
		<link>https://www.aeoncomputing.com/aeon-computing-eclipsesl-wins-best-hpc-storage-product-or-technology-at-sc2012-in-salt-lake-city-2/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-computing-eclipsesl-wins-best-hpc-storage-product-or-technology-at-sc2012-in-salt-lake-city-2</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Fri, 16 Nov 2012 13:00:31 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=2282</guid>

					<description><![CDATA[<p>For Immediate Release Aeon Computing EclipseSL Wins Best HPC Storage Product or Technology at SC2012 in Salt Lake City Company reshaping Lustre storage design with their next generation Lustre Storage Appliance. Salt Lake City, Nov. 12, 2012  Aeon Computing, a leading HPC and storage vendor, has been awarded the 2012 HPCwire Best HPC Storage</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-computing-eclipsesl-wins-best-hpc-storage-product-or-technology-at-sc2012-in-salt-lake-city-2/">Aeon Computing EclipseSL Wins Best HPC Storage Product or Technology at SC2012 in Salt Lake City</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1>For Immediate Release</h1>
<p>Aeon Computing EclipseSL Wins Best HPC Storage Product or Technology at SC2012 in Salt Lake City</p>
<p>Company reshaping Lustre storage design with their next generation Lustre Storage Appliance.</p>
<p>Salt Lake City, Nov. 12, 2012 &#8211; Aeon Computing, a leading HPC and storage vendor, has been awarded the 2012 HPCwire Best HPC Storage Product or Technology award for their EclipseSL Lustre appliance, which is the foundation of the 4 PetaByte Data Oasis storage system at the San Diego Supercomputer Center (SDSC). This next-generation system pushes the limits of Lustre storage performance.</p>
<p>The Data Oasis system supports three major client clusters &#8211; Triton, Trestles, and Gordon &#8211; using different bridging technologies: a Myrinet-to-10 Gigabit Ethernet (GbE) bridge (320 Gb/s), an InfiniBand-to-Ethernet bridge (240 Gb/s), and direct Lustre routing nodes. The EclipseSL provides 4 PetaBytes of storage with a sustained 100 GB/s data rate. Data Oasis was built with 64 Aeon EclipseSL storage building blocks, which constitute the system&#8217;s Object Storage Servers (OSSs). Each of these is an I/O powerhouse in its own right, with 36 high-speed SAS drives and two dual-port 10GbE network cards; each OSS delivers sustained rates of over 2 GB/s to remote clients. Data Oasis capacity and bandwidth are expandable with additional OSSs at commodity pricing levels.</p>
<p>We believe that this is the largest and fastest implementation of an all-Ethernet Lustre storage system, said Phil Papadopoulos, SDSCs chief technical officer, who is responsible for the centers data storage systems. &#8220;In our open procurement process, Aeon responded with a super-charged design that efficiently utilized all available data pathways in the system including dual QPI, dual 10 Gigabit Ethernet, and dual, blistering-fast RAID controllers. They fundamentally changed our perspective on how to efficiently scale out and hit 100GB/sec of sustained throughput in just 64 storage nodes&#8221; continued Papadopoulos.</p>
<p>SDSCs requirements for high performance, affordability, and maintainability really pushed the envelope, according to Jeff Johnson, co-founder of Aeon Computing. By working closely with SDSCs engineers and systems staff, we were able to deliver a solution that meets the rigorous demands of data-intensive computing.</p>
<p>The new Intel&#174; E5-based EclipseSL system is a 4U form factor storage device that provides up to 144 TB (up from 108 TB), supports PCIe Gen3, and has demonstrated data rates approaching 5 GB/s (up from 3.8 GB/s). For more details and an expanded technical discussion of an EclipseSL-based Lustre filesystem, visit Aeon in Booth 2119 at SC12 in Salt Lake City. The next-generation EclipseSL is available for sale and can be used as a standalone storage device or as a high-end component in advanced storage system designs.</p>
<h1>About Aeon Computing</h1>
<p>Aeon Computing has over 55 years of staff experience in high performance computing, enterprise computing architectures, and data storage, with a focus on architecting perfectly suited customer solutions. Aeon&#8217;s approach is to learn about their customers&#8217; research, needs, and challenges before proposing a solution. Their customers include academic, government, and commercial institutions that prefer high performance design over stock solutions.</p>
<h1>About SDSC</h1>
<p>Founded in 1985, the San Diego Supercomputer Center (SDSC) enables international science and engineering discoveries through advances in computational science and data-intensive, high-performance computing. SDSC is an Organized Research Unit of the University of California, San Diego with a staff of talented scientists, software developers, and support personnel.</p>
<h1>About the HPCwire Awards</h1>
<p>The HPCwire Awards originated in 2003 as an annual event to honor thought leaders in the HPC community at the year&#8217;s biggest supercomputing conference, The International Conference for High Performance Computing: http://sc12.supercomputing.org.</p>
<h3>For more information, please contact</h3>
<p><strong> Doug Johnson</strong>, Co-founder, doug.johnson@aeoncomputing.com, <strong>619.316.3940</strong><br />
<strong> Jeff Johnson</strong>, Co-founder, jeff.johnson@aeoncomputing.com, <strong>619-204-9061</strong><br />
 <strong>Peter Pelekis</strong>, Co-founder, peter.pelekis@aeoncomputing.com, <strong>858-967-9879</strong><br />
 <strong>Greg Faussette</strong>, Director of Sales, greg.faussette@aeoncomputing.com, <strong>714-267-8200</strong></p>
<p><strong>www.aeoncomputing.com</strong></p><p>The post <a href="https://www.aeoncomputing.com/aeon-computing-eclipsesl-wins-best-hpc-storage-product-or-technology-at-sc2012-in-salt-lake-city-2/">Aeon Computing EclipseSL Wins Best HPC Storage Product or Technology at SC2012 in Salt Lake City</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing deploys 18PB storage resource to major Wall Street trading firm</title>
		<link>https://www.aeoncomputing.com/aeon-computing-deploys-18pb-storage-resource-to-major-wall-street-trading-firm/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-computing-deploys-18pb-storage-resource-to-major-wall-street-trading-firm</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Tue, 10 Apr 2012 15:26:58 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[18PB]]></category>
		<category><![CDATA[6Gb SAS]]></category>
		<category><![CDATA[fully-redundant]]></category>
		<category><![CDATA[high-availability]]></category>
		<guid isPermaLink="false">http://theretailer.getbowtied.com/blank/?p=962</guid>

					<description><![CDATA[<p>Aeon Computing deploys 18PB storage resource to major Wall Street trading firm. The storage resource, based on a high-availability, fully redundant, end-to-end 6Gb SAS architecture, was deployed simultaneously to four data centers around the world.</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-computing-deploys-18pb-storage-resource-to-major-wall-street-trading-firm/">Aeon Computing deploys 18PB storage resource to major Wall Street trading firm</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Aeon Computing deploys 18PB storage resource to major Wall Street trading firm. The storage resource, based on a high-availability, fully redundant, end-to-end 6Gb SAS architecture, was deployed simultaneously to four data centers around the world.</p><p>The post <a href="https://www.aeoncomputing.com/aeon-computing-deploys-18pb-storage-resource-to-major-wall-street-trading-firm/">Aeon Computing deploys 18PB storage resource to major Wall Street trading firm</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>T-Platforms Signs Strategic Reseller Agreement With Aeon Computing</title>
		<link>https://www.aeoncomputing.com/t-platforms-signs-strategic-reseller-agreement-with-aeon-computing/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=t-platforms-signs-strategic-reseller-agreement-with-aeon-computing</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Wed, 15 Jun 2011 13:09:00 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=2288</guid>

					<description><![CDATA[<p>MOSCOW &#38; SAN DIEGO&#8211;(BUSINESS WIRE) T-Platforms, a leading global HPC company providing comprehensive supercomputing systems, software and services, and AEON Computing, a U.S.-based supplier of HPC clusters, servers, workstations and custom solutions, today announced the signing of a strategic reseller agreement between the two companies. As part of this new agreement, AEON Computing will supply</p>
<p>The post <a href="https://www.aeoncomputing.com/t-platforms-signs-strategic-reseller-agreement-with-aeon-computing/">T-Platforms Signs Strategic Reseller Agreement With Aeon Computing</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1>MOSCOW &amp; SAN DIEGO&#8211;(BUSINESS WIRE)</h1>
<p>T-Platforms, a leading global HPC company providing comprehensive supercomputing systems, software and services, and AEON Computing, a U.S.-based supplier of HPC clusters, servers, workstations and custom solutions, today announced the signing of a strategic reseller agreement between the two companies. As part of this new agreement, AEON Computing will supply T-Platforms systems, components and integrated solutions to the U.S. high performance computing market.</p>
<p>The companies are targeting the high-end scientific and technical computing market where customers require balanced and powerful computing solutions with large storage and fast networking requirements. Anticipated users will range from the largest government and academic research labs to industries such as aerospace, automotive, manufacturing, financial, energy research and bioinformatics.</p>
<p>Signing this partnership agreement with AEON Computing enables T-Platforms to deliver our leading-edge, high-density solutions to U.S. customers, and ultimately this combined effort will lead to a more diversified product portfolio as we adapt our technology to the needs of these new markets, said Alexey Komkov marketing director for T-Platforms. We are very excited to be working with AEON, an ideal partner with their strong technical expertise and deep market knowledge.<br />
The technological development and success of T-Platforms, cultivated by almost 200 system installations, is a tremendous foundation for this new partnership, said Jeff Johnson, AEON computing. T-Platforms leads the market in terms of density and they have extensive, ongoing R&amp;D efforts in many other areas of HPC system development. We look forward to working closely with T-Platforms to deliver highly integrated, well balanced HPC solutions to our customers.</p>
<h1>Contacts</h1>
<p>T-Platforms<br />
<strong>Andrey Mitrofanov</strong><br />
T-Platforms PR Manager<br />
+7 926 697-22-22<br />
Andrey.Mitrofanov@t-platforms.ru<br />
or</p>
<p><strong> AEON Computing</strong><br />
Jeff Johnson<br />
858-412-3810<br />
jeff.johnson@aeoncomputing.com</p>
<p><a href="http://http://aeoncomputing.com" target="_blank"><a href="" class="light_button" target="">Read More</a></a></p><p>The post <a href="https://www.aeoncomputing.com/t-platforms-signs-strategic-reseller-agreement-with-aeon-computing/">T-Platforms Signs Strategic Reseller Agreement With Aeon Computing</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing Deploys 1 Petabyte Lustre Parallel Filesystem for the US Department of Energy</title>
		<link>https://www.aeoncomputing.com/ready-made-future-cliche/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ready-made-future-cliche</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Thu, 08 Apr 2010 15:25:02 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[Filesystem]]></category>
		<category><![CDATA[Intel® Xeon]]></category>
		<category><![CDATA[Lustre]]></category>
		<category><![CDATA[Processor]]></category>
		<category><![CDATA[QDR]]></category>
		<guid isPermaLink="false">http://theretailer.getbowtied.com/blank/?p=960</guid>

					<description><![CDATA[<p>Aeon Computing deploys 1 petabyte (1PB) Lustre parallel filesystem for the US Department of Energy and University of Maryland. The filesystem design comprises 48 QDR Infiniband connections, 736 2.0-terabyte disk drives, and Intel® Xeon 5500 processors, and will be deployed concurrently with a new 448-processor Intel® Xeon 5500 based cluster and</p>
<p>The post <a href="https://www.aeoncomputing.com/ready-made-future-cliche/">Aeon Computing Deploys 1 Petabyte Lustre Parallel Filesystem for the US Department of Energy</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Aeon Computing deploys 1 petabyte (1PB) Lustre parallel filesystem for the US Department of Energy and University of Maryland. The filesystem design comprises 48 QDR Infiniband connections, 736 2.0-terabyte disk drives, and Intel® Xeon 5500 processors, and will be deployed concurrently with a new 448-processor Intel® Xeon 5500 based cluster and QDR Infiniband fabric. The awarded contract, in excess of $1 million, is funded by the American Recovery and Reinvestment Act of 2009 (ARRA).</p><p>The post <a href="https://www.aeoncomputing.com/ready-made-future-cliche/">Aeon Computing Deploys 1 Petabyte Lustre Parallel Filesystem for the US Department of Energy</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing Launches New Eclipse7</title>
		<link>https://www.aeoncomputing.com/aeon-computing-launches-new-eclipse7%c2%99/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-computing-launches-new-eclipse7%25c2%2599</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Tue, 16 Mar 2010 13:06:37 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[Eclipse7]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=2285</guid>

					<description><![CDATA[<p>Aeon Computing launches new Eclipse7 compute and storage platforms powered by the latest Intel Xeon 5600 processors.</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-computing-launches-new-eclipse7%c2%99/">Aeon Computing Launches New Eclipse7</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Aeon Computing launches new Eclipse7 compute and storage platforms powered by the latest Intel Xeon 5600 processors.</p><p>The post <a href="https://www.aeoncomputing.com/aeon-computing-launches-new-eclipse7%c2%99/">Aeon Computing Launches New Eclipse7</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing Installs Horizon RAID Based Storage Array at a Major Wall Street Investment Bank</title>
		<link>https://www.aeoncomputing.com/aeon-computing-installs-horizon-raid-based-storage-array-at-a-major-wall-street-investment-bank/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-computing-installs-horizon-raid-based-storage-array-at-a-major-wall-street-investment-bank</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Wed, 20 Jan 2010 11:11:27 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[Horizon RAID]]></category>
		<category><![CDATA[Petabyte]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=2261</guid>

					<description><![CDATA[<p>Aeon Computing installs one petabyte (1PB) Horizon RAID based storage array at a major Wall Street investment bank.</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-computing-installs-horizon-raid-based-storage-array-at-a-major-wall-street-investment-bank/">Aeon Computing Installs Horizon RAID Based Storage Array at a Major Wall Street Investment Bank</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Aeon Computing installs one petabyte (1PB) Horizon RAID based storage array at a major Wall Street investment bank.</p><p>The post <a href="https://www.aeoncomputing.com/aeon-computing-installs-horizon-raid-based-storage-array-at-a-major-wall-street-investment-bank/">Aeon Computing Installs Horizon RAID Based Storage Array at a Major Wall Street Investment Bank</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Aeon Computing Awarded Infiniband Specialist Certification by QLogic Corporation</title>
		<link>https://www.aeoncomputing.com/aeon-computing-awarded-infiniband-specialist-certification-by-qlogic-corporation/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=aeon-computing-awarded-infiniband-specialist-certification-by-qlogic-corporation</link>
		
		<dc:creator><![CDATA[Jeff]]></dc:creator>
		<pubDate>Sat, 05 Dec 2009 11:10:21 +0000</pubDate>
				<category><![CDATA[Press Releases]]></category>
		<category><![CDATA[Corporation]]></category>
		<category><![CDATA[QLogic]]></category>
		<category><![CDATA[Specialist]]></category>
		<guid isPermaLink="false">http://www.aeoncomputing.com/?p=2259</guid>

					<description><![CDATA[<p>Aeon Computing awarded Infiniband Specialist Certification by QLogic Corporation for expertise in design, configuration, deployment, and support of QLogic-based Infiniband fabrics.</p>
<p>The post <a href="https://www.aeoncomputing.com/aeon-computing-awarded-infiniband-specialist-certification-by-qlogic-corporation/">Aeon Computing Awarded Infiniband Specialist Certification by QLogic Corporation</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Aeon Computing awarded Infiniband Specialist Certification by QLogic Corporation for expertise in design, configuration, deployment, and support of QLogic-based Infiniband fabrics.</p><p>The post <a href="https://www.aeoncomputing.com/aeon-computing-awarded-infiniband-specialist-certification-by-qlogic-corporation/">Aeon Computing Awarded Infiniband Specialist Certification by QLogic Corporation</a> first appeared on <a href="https://www.aeoncomputing.com">Aeon Computing</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
