There is plenty of buzz around high-performance computing (HPC), especially with Luxembourg’s acquisition of its own supercomputer MeluXina last year. MeluXina’s hardware can crunch numbers at a gigantic rate: ten million billion operations per second, or ten “petaflops”.
Impressive. But what does it actually mean?
At a recent webinar run by Luxinnovation, senior solutions engineer Luis Vela Vela of LuxProvide, the company that manages access to MeluXina, introduced the basics of HPC as well as three examples of how businesses could use it.
The basics of AI
Vela started by mentioning two elements: data and the relationships within that data, which together make up a world. (A world. Not the world.) This world, combined with the ability to apply those relationships towards some end, is artificial intelligence.
It isn’t quite that simple, he went on, but those are the basics. The relevance of data, of course, is that it’s a resource growing exponentially in volume. “If we’re not already drowning in an abundance of data, then we certainly will be in the future,” he said.
The trick is finding those relationships between data points. For that, Vela explained, there are many techniques—which is where machine learning and deep learning come in.
His explanation can be summarised as follows: imagine an AI built to identify images. The input is a photo of something, and the AI is meant to output one of two labels: “car” or “not car”. Machine learning involves the intervention of a human being, who extracts some features that will help the computer in its classification, whereas deep learning (mostly) skips this intervention step, starting from the raw data and carrying out the process by itself.
“Both of them are valid approaches, depending on the end result that you expect to have,” commented the engineer.
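To make the distinction concrete, here is a minimal sketch in Python (not shown in the webinar; the data, features and model choices are purely illustrative) that tackles the “car or not car” task both ways using scikit-learn: once with hand-picked features, once by feeding raw pixels to a small neural network.

```python
# Minimal sketch (not from the webinar): the same "car or not car" task
# tackled both ways, on synthetic 8x8 "images". All data is random and
# purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_img = rng.random((200, 8, 8))        # 200 fake grayscale images
y = rng.integers(0, 2, 200)            # 1 = "car", 0 = "not car"

# Machine learning: a human decides which features matter ...
def extract_features(img):
    # e.g. brightness, contrast, and a crude "edge" measure
    return [img.mean(), img.std(), np.abs(np.diff(img, axis=1)).mean()]

X_feat = np.array([extract_features(img) for img in X_img])
ml_model = LogisticRegression().fit(X_feat, y)

# Deep learning: raw pixels go straight into the network, which must
# discover useful features on its own.
X_raw = X_img.reshape(200, -1)         # flatten each image to 64 numbers
dl_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X_raw, y)

label = {1: "car", 0: "not car"}
print("ML says:", label[ml_model.predict(X_feat[:1])[0]])
print("DL says:", label[dl_model.predict(X_raw[:1])[0]])
```

The point is the division of labour: in the first approach a human decides which features matter; in the second, the network has to discover them itself.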
Use case one: predictive maintenance
Predictive maintenance, Vela explained, is “a smart way of doing maintenance”, for instance for equipment you might have in a factory, home or office. Simpler approaches include reactive maintenance, where you fix something only after it breaks, and preventative maintenance, where you fix or replace parts at regular intervals, before anything has the chance to break.
The predictive method, however, uses existing data about the equipment in question to predict when a failure will occur, giving you an optimal window in which to make repairs. When a problem or potential problem is detected, the AI also learns from it, updating the underlying model it uses to assess the equipment in the first place. This makes it far more efficient than the other two methods.
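As a rough illustration of the idea (this is not LuxProvide’s code; the sensor names and numbers are invented), one could train a regressor on historical sensor readings to estimate how many operating hours remain before a failure:

```python
# Rough sketch of predictive maintenance (sensor names and numbers are
# invented): learn from past sensor histories how long equipment keeps
# running, then flag machines whose predicted remaining life is short.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Columns: temperature (degrees C), vibration (mm/s), hours already run
X = rng.random((500, 3)) * [80, 5, 10_000]
# Invented ground truth: remaining life shrinks with hours already run
hours_to_failure = 2_000 - 0.15 * X[:, 2] + rng.normal(0, 50, 500)

model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, hours_to_failure)

current_reading = np.array([[65.0, 2.1, 9_500.0]])
remaining = model.predict(current_reading)[0]
if remaining < 700:
    print(f"Schedule maintenance soon: ~{remaining:.0f} hours of life left")
else:
    print(f"No action needed: ~{remaining:.0f} hours of life left")
```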
Use case two: natural language models
Natural language models are what enable AI to seemingly “understand” us when we speak, write or draw. Vela spoke of “a big bag of numbers, with weights and biases” that is used to predict the next word in a sentence. First you train the language model by feeding it massive datasets (“300 billion tokens of text”, he quoted as an example), after which additional feedback on right and wrong predictions improves the “intelligence” of the model.
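A toy example can show what “predicting the next word” means, even though a real model relies on billions of learned weights rather than a table of word counts. The corpus here is invented:

```python
# Toy next-word predictor (vastly simplified: a real language model has
# billions of weights, not a table of word counts). Corpus is invented.
from collections import Counter, defaultdict

corpus = "the car is red the car is fast the road is long".split()

# Count which word tends to follow which (a "bigram" model)
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Pick the continuation seen most often during "training"
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "car"
print(predict_next("car"))  # -> "is"
```

Large language models do something conceptually similar, but they learn statistical patterns across hundreds of billions of tokens instead of counting pairs in a dozen words.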
Such models are what power chatbots, for instance, as well as something called a “no code approach”. Imagine (said Vela) that you, as a businessperson, wish to calculate the revenue of users in a huge database, an action for which you need an SQL command. You don’t know how to create this command, however. With the correct AI tool, you can ask your question in plain text and it can be converted into the SQL query you need.
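The input and output of such a tool might look something like the following (the table and column names are hypothetical, and the webinar did not name a specific product):

```python
# Hypothetical example of a "no code" text-to-SQL conversion.
# The question is written in plain text by the businessperson; the SQL
# is what an AI tool of this kind might generate in response.
question = "What is the total revenue per user?"

generated_sql = """
SELECT user_id, SUM(amount) AS total_revenue
FROM purchases
GROUP BY user_id;
"""

print("Question:", question)
print("Generated SQL:", generated_sql)
```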
Use case three: reduced order models
Vela’s final example was reduced order models. The idea is to use a supercomputer to create an extremely detailed simulation of (for example) wind patterns, something far too complex for your laptop to handle, and then to distil it into a lighter model that your laptop can run quickly.
For anyone who does prototyping or simulates new manufacturing techniques, Vela stressed, reduced order models could be very interesting. There is an upfront cost to train the model on MeluXina, but after that you can run it easily on your laptop. Vela cited one more example, from an AI company called Monolith that wanted to evaluate the stress on different designs of wind turbine blade. The team used 600 simulations to train a reduced order model that could then easily evaluate the stress on new designs, saving them huge amounts of time.
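That workflow can be sketched in a few lines (illustrative only, and not Monolith’s actual method: the “simulation” below is a stand-in formula, and the surrogate is a scikit-learn Gaussian process):

```python
# Illustrative sketch of a reduced order model (not Monolith's method):
# the "expensive" simulator below is a stand-in formula; in reality each
# run would be hours of supercomputer time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(blade_angle):
    # Pretend this computes stress on a blade design via full physics
    return np.sin(blade_angle) + 0.1 * blade_angle**2

# Step 1 (would run on MeluXina): a batch of full simulations
angles = np.linspace(0, 3, 50).reshape(-1, 1)
stresses = expensive_simulation(angles).ravel()

# Step 2: train the reduced order model on those results
surrogate = GaussianProcessRegressor().fit(angles, stresses)

# Step 3 (runs on a laptop): evaluate new designs almost instantly
new_designs = np.array([[0.5], [1.7], [2.9]])
print(surrogate.predict(new_designs))
```

The expensive step runs once, on the big machine; every evaluation afterwards is nearly free.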
Luxinnovation’s “HPC Thursdays” series currently runs every other week.