Artificial intelligence is changing the way businesses store and access their data. That's because traditional data storage systems were designed to handle simple commands from a handful of users at a time, whereas today's AI systems, with millions of agents, need to continuously access and process large amounts of data in parallel. Traditional data storage systems now have layers of complexity that slow AI systems down, because data must pass through several tiers before reaching the graphical processing units (GPUs) that are the brain cells of AI.
Cloudian, co-founded by Michael Tso SM ’93 and Hiroshi Ohta, is helping storage keep pace with the AI revolution. The company has developed a scalable storage system for businesses that streamlines the flow of data between storage and AI models. The system reduces complexity by applying parallel computing to data storage, consolidating AI functions and data on a single parallel-processing platform that stores, retrieves, and processes scalable datasets, with direct, high-speed transfers between storage and GPUs and CPUs.
Cloudian's integrated storage-computing platform simplifies the process of building commercial-scale AI tools and gives businesses a storage foundation that can keep up with the rise of AI.
“One of the things that people miss about AI is that it is all about data,” Tso says. “You can't get a 10 percent improvement in AI performance with 10 percent more data, or even 10 times more data; you need 1,000 times more data. You need to be able to store that data in a way that is easy to manage, and in such a way that you can embed computation into it, so the computation runs where the data is, without moving the data around.”
From MIT to industry
As a graduate student at MIT in the 1990s, Tso was introduced to parallel computing, a type of computation in which many calculations occur simultaneously, by Professor William Dally. Tso also worked on parallel computing with Associate Professor Greg Papadopoulos.
“It was an incredible time, because most schools had one supercomputing project running; MIT had four,” Tso remembers.
As a graduate student, Tso worked with MIT senior research scientist David Clark, a computing pioneer who contributed to the Internet's early architecture, particularly the Transmission Control Protocol (TCP) that delivers data between systems.
“As a graduate student at MIT, I worked on disconnected and intermittent networking operations for massive-scale systems,” Tso says. “It's funny: 30 years later, that's what I'm still doing today.”
After graduation, Tso worked at Intel's Architecture Lab, where he invented data synchronization algorithms used by BlackBerry. He also created specifications for Nokia that ignited the ringtone download industry. He then joined Inktomi, a startup co-founded by Eric Brewer SM ’92, PhD ’94, that pioneered search and web content distribution technologies.
In 2001, Tso started Gemini Mobile Technologies with Joseph Norton ’93, SM ’93 and others. The company built the world's largest mobile messaging systems to handle the massive data growth from camera phones. Then, in the late 2000s, cloud computing became a powerful way for businesses to rent virtual servers as they expanded their operations. Tso noticed that the amount of data being collected was growing far faster than networking speeds, so he decided to pivot the company.
“Data is being created in a lot of different places, and that data has its own gravity: it costs you money and time to move it,” Tso says. “That means the end state is a distributed cloud that reaches out to edge devices and servers. You have to bring the cloud to the data, not the data to the cloud.”
Tso officially launched Cloudian out of Gemini Mobile Technologies in 2012, with a new emphasis on helping customers with scalable, distributed, cloud-compatible data storage.
“What we didn't see when we first launched the company was that AI would become the ultimate use case for data at the edge,” Tso says.
Although Tso's research at MIT began more than two decades ago, he sees strong connections between his work then and the industry today.
“It's like my whole life is playing back, because David Clark and I worked on disconnected and intermittently connected networks, which are part of every edge use case today, and Professor Dally was working on very fast, scalable interconnects,” Tso says, noting that Dally is now senior vice president and chief scientist at Nvidia. “Now, when you look at Nvidia's modern chip architecture and the way they do interchip communication, it has Dally's work in it. With Professor Papadopoulos, I worked on accelerating application software with parallel computing hardware without rewriting applications, and it is all working out perfectly.”
Today, Cloudian's platform uses an object storage architecture in which all types of data, such as documents, videos, and sensor data, are stored as unique objects with metadata. Object storage can manage massive datasets in a flat file structure, making it ideal for unstructured data and AI systems, but traditionally it could not send data directly to AI models without first copying the data into a computer's memory system, creating latency and energy costs for businesses.
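The object storage model described above can be illustrated with a minimal sketch: every item, regardless of type, lives as a unique object under a flat key namespace, with its metadata readable separately from its body. All names here are illustrative stand-ins, not Cloudian's actual API.

```python
# Minimal sketch of an object store: a flat key namespace maps each key
# to an object body plus its metadata. Illustrative only; real object
# stores (Cloudian's included) expose S3-style APIs over the network.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # Keys look like paths but there is no directory hierarchy to traverse.
        self._objects[key] = (data, dict(metadata or {}))

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        # Metadata can be inspected without fetching the object body.
        _, metadata = self._objects[key]
        return metadata


store = ObjectStore()
store.put("sensors/line4/robot17.json", b'{"temp": 71}',
          metadata={"content-type": "application/json", "source": "robot17"})
print(store.head("sensors/line4/robot17.json")["source"])  # robot17
```

The flat namespace is what lets object stores scale to billions of items: lookups are direct key accesses rather than walks through nested directories.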
In July, Cloudian announced that it has extended its object storage system with a vector database that stores data in a form immediately usable by AI models. As data are ingested, Cloudian computes the vectorized form of that data in real time, to power AI tools such as recommendation engines, search, and AI assistants. Cloudian also announced a partnership with Nvidia that allows its storage system to work directly with the AI company's GPUs. Cloudian says the new system enables even faster AI operations and reduces computing costs.
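The vectorize-at-ingest idea can be sketched as follows: when each object is stored, an embedding is computed immediately, so the data is searchable by similarity without a separate indexing pass. This is a toy illustration under stated assumptions; the hashing "embedding" is a deliberately crude stand-in for the learned models a real system would use, and none of these names come from Cloudian.

```python
# Hedged sketch: compute a vector embedding at ingest time so stored data
# is immediately searchable. The embedding below is a toy bag-of-words
# hashing scheme, purely illustrative.
import hashlib
import math
from collections import Counter

def embed(text, dim=256):
    # Map each word to a deterministic slot and L2-normalize the counts.
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        slot = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[slot] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorizingStore:
    def __init__(self):
        self._data = {}     # key -> raw object
        self._vectors = {}  # key -> embedding computed at ingest time

    def put(self, key, text):
        self._data[key] = text
        self._vectors[key] = embed(text)  # vectorize as data arrives

    def search(self, query, top_k=1):
        # Rank stored objects by cosine similarity to the query embedding.
        q = embed(query)
        scored = sorted(
            self._vectors.items(),
            key=lambda kv: -sum(a * b for a, b in zip(q, kv[1])),
        )
        return [key for key, _ in scored[:top_k]]

store = VectorizingStore()
store.put("doc1", "robot arm maintenance schedule")
store.put("doc2", "quarterly financial report")
print(store.search("when to service the robot arm"))  # ['doc1']
```

Because the vectors already exist when a query arrives, the AI model never waits on a batch re-indexing job; that is the latency win the announcement describes.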
“Nvidia contacted us about a year and a half ago, because GPUs are only useful if they have data to keep them busy,” Tso says. “Now people are realizing that it is easier to move AI to the data than to move huge datasets. Our storage systems embed many AI functions, so we are able to pre- and post-process data for AI where we collect and store the data.”
AI-first storage
Cloudian is helping around 1,000 companies worldwide get more value from their data, including large manufacturers, financial services providers, health care organizations, and government agencies.
Cloudian's storage platform is helping one large automaker, for example, use AI to determine when each of its manufacturing robots needs to be serviced. Cloudian is also working with the National Library of Medicine to store research articles and patents, and with the National Cancer Database, a rich dataset that AI models can process to develop new treatments or reach new insights.
“The GPU has been an incredible enabler,” Tso says. “Moore's Law doubles the amount of computation every two years, but GPUs are able to parallelize operations on their chips, so you can network GPUs together and shatter Moore's Law. That scaling is pushing AI to new levels of intelligence, but the only way to keep GPUs working hard is to feed them data at the same speed that they compute.”