Artificial Intelligence and inference – how improvements in Machine Learning are advancing the state of the art
Understand the importance of inference in AI/ML and get a glimpse of how we are meeting this challenge
Advancements in Artificial Intelligence (AI) can be measured in many ways. Hardware indicators such as petaflops – a machine’s ability to perform quadrillions of floating-point operations per second – tell us how capable the hardware may be, but measuring improvements in AI itself is much more complicated.
Since AI use cases vary, their performance indicators will vary as well – this is where inference comes into play.
Inference is a system’s ability to correctly make connections between two pieces of known information. With regard to AI, the challenge for the past decade has been to ‘teach’ a system how to learn so that it can begin to make these kinds of inferences on its own.
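As a toy illustration (not a description of any production system), a short Python sketch shows what ‘connecting two pieces of known information’ can look like: given two hypothetical facts, a single rule derives a third connection that neither fact states on its own. The facts and the rule here are assumptions chosen purely for demonstration.

```python
# Toy sketch of inference: derive a new connection from two known facts.
# The facts and the single rule below are illustrative assumptions only.
facts = {
    ("rock", "is_a", "solid"),      # a rock is a kind of solid
    ("solid", "property", "hard"),  # solids have the property 'hard'
}

def infer(known):
    """If x is_a y and y has property p, conclude that x has property p."""
    derived = set()
    for (a, rel1, b) in known:
        for (c, rel2, d) in known:
            if rel1 == "is_a" and rel2 == "property" and b == c:
                derived.add((a, "property", d))
    return derived

print(infer(facts))  # the system concludes: ('rock', 'property', 'hard')
```

A real system holds millions of such connections and far richer rules, but the principle is the same: new knowledge emerges from combining existing knowledge.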
We take for granted how much of our human intelligence is hard-wired, yet from the moment we enter this world our ‘sensors’ begin ingesting information and building meaningful connections from a very limited knowledge set.
Sight, sound, touch, taste and even smell bombard our small minds with information about who is speaking and what their intentions might be.
As we grew older and learned more about our surroundings, we could leverage this knowledge for more structured learning. By the time we began school, we did not need to learn the density of certain materials to understand that rocks are hard and would hurt if hurled at us. That lesson had been learned (possibly the hard way) well before pre-school or kindergarten.
This is where the necessity of Machine Learning (ML) emerged: creating a kind of knowledge set so that the AI does not need to re-learn the basics every time it is employed.
As more advanced ML models were developed, new problems emerged – chief among them, creating the connections between pieces of learned information.
The famous phrase “One morning I shot an elephant wearing my pyjamas” can be interpreted in multiple ways. Humans understand the humour behind the phrase, but a machine does not – because to a machine, why wouldn’t an elephant be wearing your pyjamas?
An understanding of materials, relative sizes and basic zoology helps in parsing the phrase and inferring the humour.
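The ambiguity can be made concrete with a hedged sketch: the modifier “wearing my pyjamas” can attach to either noun phrase, producing the two readings a parser would have to choose between. The sentence fragments below are the only inputs; nothing here reflects a real parser’s internals.

```python
# The two syntactic readings arise from where the modifier attaches.
subject, verb, obj = "I", "shot", "an elephant"
modifier = "wearing my pyjamas"

readings = [
    f"[{subject} {modifier}] {verb} {obj}",   # the speaker wore the pyjamas
    f"{subject} {verb} [{obj} {modifier}]",   # the elephant wore the pyjamas
]
for reading in readings:
    print(reading)
```

Background knowledge – elephants do not wear pyjamas – is what lets a human discard the second reading as literal fact and enjoy it as humour; a machine without that knowledge has no basis for preferring either parse.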
Using machines to teach other machines is extremely helpful in removing the tedium of creating these types of connections for AI, but the process still requires human input to establish whether the machine is teaching correctly.
Over the past few years, we have developed a process that allows AI and ML to work together much more efficiently. More importantly, the system is modular, so functions and capabilities can be added, removed or reconfigured as needed, allowing further customisation depending on the end-use scenario.
In the coming months we will cover the specifics behind this system, but our early results are extremely encouraging. These processes are already in production with our WeedRemeed system, and we are rapidly expanding its applications.