Last August we got to experience the sixth edition of the Ai4 conference, the largest independently organized conference in the field of Artificial Intelligence for industry. This year, Ai4 hosted more than 5000 participants, doubling last year's attendance. Unlike other technical conferences such as CVPR, NeurIPS or ICCV, Ai4 is geared entirely towards industry. This resulted in a wider variety of attendees, from researchers and engineers to founders, management and sales teams.
This year RidgeRun.ai was fortunate enough to be a part of it. Here are some of our favorite talks and moments from the conference. Enjoy!
Geoffrey Hinton's Keynote
The opening keynote of the conference was delivered by one of the godfathers of AI: Geoffrey Hinton. With the confidence you would expect of a living legend, Mr. Hinton started off by walking us through the architecture of an artificial neural network and how backpropagation can be applied to make it learn. These advances allowed him to train what he believes was the very first language model in the mid-1980s.
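For readers who have never seen backpropagation in action, here is a minimal sketch of the idea using PyTorch, whose autograd engine implements backpropagation under the hood. This is our own toy example (a tiny network learning XOR), not the 1980s model Mr. Hinton described.

```python
# Minimal sketch: a two-layer network trained with backpropagation (via autograd).
# This is our own toy example, not Hinton's original 1980s setup.
import torch
import torch.nn as nn

# XOR: a classic problem a single-layer perceptron cannot solve.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(
    nn.Linear(2, 8),   # hidden layer
    nn.Tanh(),
    nn.Linear(8, 1),   # output layer (logits)
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

for step in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()    # backpropagation: gradients of the loss w.r.t. every weight
    optimizer.step()   # gradient descent update

print(torch.sigmoid(model(x)).round())  # should be approximately [[0], [1], [1], [0]]
```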
I like to think of it as the precursor of the LLM we have today. It wasn't very capable, but it clearly showed language patterns being applied.
Geoffrey Hinton
At that time there were multiple contradicting theories about language. Most notably, Noam Chomsky held the position that language cannot be learned through experience, but is instead driven by innate cognitive structures hardwired into the human brain. If this were true, Mr. Hinton's efforts to model language algorithmically would be a dead end.
I asked ChatGPT to help me write a paragraph on why Chomsky was wrong.
Modern LLMs have demonstrated that language patterns can indeed be learned. There is a lot of controversy around whether these models are intelligent or simply statistical prediction machines. Some researchers think that in order to achieve true artificial intelligence a completely different approach needs to be pursued: neuro-symbolic computation. Mr. Hinton, however, thinks that LLMs, as they are, are already intelligent systems, capable of reasoning and making informed decisions!
I believe that current LLMs are accurate mathematical models of our language.
Geoffrey Hinton
Then, things got even more interesting. Mr. Hinton, well known for his stance regarding the AI apocalypse, shared his thoughts on the risks that artificial intelligence poses to humanity. On the one hand, the near-term risks:
Fake images, voices and video
Massive job losses
Lethal autonomous weapons
Cyber crime and deliberate pandemics
Discrimination and bias
Nonetheless, he makes it clear that "AI will be immensely helpful in areas like healthcare which is why its development cannot be stopped". However, he made his disagreement with open-sourcing the weights of foundational large-scale models very clear.
I think it's a terrible idea to open source the weights. Bad actors will start making use of them for evil purposes. I support open sourcing the source code but not the weights. There will be no community of volunteers improving the weights, as happens with open source code.
Geoffrey Hinton
Finally, he shared his thoughts on the long-term existential risks. Mr. Hinton is concerned that human extinction could happen if AI systems become much smarter than us.
This possibility is NOT science fiction.
Geoffrey Hinton
While it currently seems like AI cannot get much smarter due to limits on computational capability, AI systems can improve in other ways. One clear example is how multiple AI agents can become experts in different fields and share knowledge by exchanging weights or gradients. This, in fact, could represent a way of scaling that is simply not available to us humans.
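As a rough illustration of that idea, the sketch below merges two models with identical architectures by averaging their weights, in the spirit of federated averaging. This is our own simplification of the point, not a method Mr. Hinton prescribed.

```python
# Rough sketch: two "agents" with the same architecture merge what they learned
# by averaging their weights (federated-averaging style). Purely illustrative.
import torch
import torch.nn as nn

def make_agent():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

agent_a = make_agent()  # imagine this one trained on one domain
agent_b = make_agent()  # ...and this one on another

# Average the parameters into a merged model.
merged = make_agent()
state_a, state_b = agent_a.state_dict(), agent_b.state_dict()
merged.load_state_dict({k: (state_a[k] + state_b[k]) / 2 for k in state_a})

# Humans cannot exchange synapses like this; models with shared architectures can.
```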
The AI Snowball Event
Another interesting keynote was delivered by Matt Wood, VP of Products at AWS. He starts off by saying that, throughout history, there have been a few "snowball events" that have marked breakthroughs for humanity. We, as engineers, entrepreneurs and builders, should be on the lookout for these events. Unfortunately, they are very rare.
Snowball events occur only a handful per generation. They are unusual, rare and hard to predict.
Matt Wood
Mr. Wood then goes on to offer some hints on how to identify a snowball event. He uses the development of tool-assisted agriculture as an example. The snowball pattern consists of three steps:
Discovery of a new technology with improved utility and performance: Humans discovered that sticks, rocks or animal fur, for example, could ease the agricultural process.
Deeper specialization for individual tasks: Noting the benefits of the new discovery, tools start becoming much more specialized for different tasks. Hunters discovered that rocks could be sharpened and tied to the end of a stick to provide an effective long-range weapon. Gatherers discovered that plants could be arranged into containers that allowed them to carry more of the harvest. Explorers discovered that tree logs arranged together could provide a means of transportation along rivers.
Increasingly collaborative systems of work and play: This evolution in productivity leads to increased collaboration between groups. For example, tribes that specialized in crops could send part of the harvest to groups that specialized in hunting, in exchange for meat. These exchanges could then travel through other groups that specialized in long-distance travel, and so on.
By identifying this three-step pattern one can catch a snowball event. Matt mentions other, more modern examples: the web browser and the mobile phone.
| Web Browser | Mobile Phone |
| --- | --- |
| Opened a gate to virtually unlimited knowledge. | Armed you with a personal computer in your pocket. |
| Then came web apps and services, which users visited for very specific purposes. | Mobile apps were then created with specific purposes and functionality. |
| Network APIs became popular, encouraging collaboration between specialized web applications. | Automation tools and assistants make use of different apps to fulfill a user's command. |
Now, as you might have imagined, Mr. Wood begins hypothesizing that Generative AI is the next snowball event. Convincingly, he argues that:
The discovery of LLMs gave us, for the first time, a system capable of communicating and "reasoning" in a much more human-like way.
Now, attempting to follow the pattern, we can see what the future of the technology holds:
We train domain-specific LLMs as experts in a very narrow topic. These perform better than the foundational models, but their knowledge is limited to that domain. We call them "agents".
We build multi-agent systems that plan and make use of domain expert agents to fulfill a complex task.
As you can see, GenAI does in fact fit the snowball pattern very accurately. Matt Wood finishes by introducing the multi-agent capabilities present in Amazon Bedrock.
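To make those two steps concrete, here is a heavily simplified, hypothetical sketch of the pattern: a router picks a domain-expert "agent" for each task and delegates to it. The `call_llm` helper and the agent prompts are placeholders of ours; Amazon Bedrock exposes its own APIs for multi-agent collaboration, which we do not reproduce here.

```python
# Hypothetical sketch of a multi-agent pattern: domain-expert agents plus a
# simple router. `call_llm` is a placeholder for whatever model API you use.
def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM provider here.")

AGENTS = {
    "finance": "You are an expert financial analyst. Answer precisely.",
    "legal":   "You are an expert contracts lawyer. Answer precisely.",
    "travel":  "You are an expert travel planner. Answer precisely.",
}

def route(task: str) -> str:
    """Ask a generalist model which expert should handle the task."""
    choice = call_llm(
        "Reply with exactly one word: finance, legal or travel.",
        f"Which expert should handle this task?\n{task}",
    )
    return choice.strip().lower()

def solve(task: str) -> str:
    # Delegate the task to the chosen domain expert (crude fallback if routing fails).
    expert = AGENTS.get(route(task), AGENTS["finance"])
    return call_llm(expert, task)
```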
I personally liked this talk a lot because it traces a roadmap not only for your projects, but for the industry in general.
Methods for Guiding Large Language Models
Another great talk was given by Aleena Taufiq, Senior AI Product Manager at Verizon. This talk was fun, informative and straight to the point: ideal if you are just starting your GenAI onboarding. From the very beginning Aleena kept spirits high by sharing her passion for pizza, cookies and cats, topics that would recur throughout her presentation!
The topic of the talk was simple: how do I choose the level of guidance I need for my LLM? More specifically, Ms. Taufiq gave a general framework for choosing between:
Prompt Engineering
Retrieval Augmented Generation
Fine Tuning
Prompt Engineering
Aleena starts by introducing the audience to prompt engineering, a technique where you steer the response of the LLM towards the desired result by carefully crafting the prompt given to the model. She shares some well-known prompt engineering techniques (a short sketch of a few of them follows the list):
Zero shot prompting: give the AI a task without providing any examples or context.
One/Few shot prompting: provide the AI one or more examples within the prompt to guide it to your desired output.
Chain of Thought prompting: ask the AI to provide its thought process/reasoning in order to improve the output.
Iterative prompting: iteratively refine your prompt after you get outputs to guide the model to your desired output.
Prompt chaining: take a complex task and break it into multiple prompts and stitch the outputs together.
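To give a flavor of a few of these techniques, the snippet below builds few-shot, chain-of-thought and chained prompts as plain strings. The `call_llm` helper is a placeholder for whichever model API you use, and the examples themselves are ours.

```python
# Illustrative prompt-engineering templates. `call_llm` is a placeholder.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM provider here.")

# Few-shot prompting: show the model a couple of examples of the desired output.
few_shot = (
    "Classify the sentiment as positive or negative.\n"
    "Review: 'The pizza was amazing!' -> positive\n"
    "Review: 'The cookies were stale.' -> negative\n"
    "Review: 'My cat loved the box it came in.' -> "
)

# Chain-of-thought prompting: ask for the reasoning before the final answer.
chain_of_thought = (
    "A pizza is cut into 8 slices and 3 people eat 2 slices each. "
    "How many slices are left? Think step by step, then give the answer."
)

# Prompt chaining: feed the output of one prompt into the next.
outline = call_llm("Write a 3-bullet outline for a blog post about RAG.")
draft = call_llm(f"Expand this outline into two short paragraphs:\n{outline}")
```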
While simple, prompt engineering can be very effective for some applications. As the speaker goes on to show, there are pros and cons to this technique:
Pros:
Simple to implement
No need for expertise
No need to be technical
Cost effective
Flexible
Cons:
Limited to original training data
Doesn't always work
Inconsistent
RAG (Retrieval Augmented Generation)
When prompting is not enough, RAG can provide more guidance to the AI. This technique, as Aleena describes it, consists of dynamically feeding relevant information from an external source of knowledge (documents, websites, wikis, etc.) as context to the LLM. Generally, the process is simply (a minimal sketch follows these steps):
Receive the user query
Find the most relevant pieces of information for the query
Feed that information into the AI's context, along with the query.
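A minimal sketch of those three steps could look like the following. The `embed` and `call_llm` functions are stand-ins for whatever embedding model and LLM you use, and the in-memory list stands in for a real vector store.

```python
# Minimal RAG sketch: retrieve the most relevant chunks, then prompt with them.
# `embed` and `call_llm` are placeholders for your embedding model and LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("Plug in an embedding model here.")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in your LLM provider here.")

documents = ["...chunked wiki pages, manuals, FAQs..."]
doc_vectors = [embed(d) for d in documents]  # usually precomputed and stored

def answer(query: str, top_k: int = 3) -> str:
    # 1. Receive the user query and embed it.
    q = embed(query)
    # 2. Find the most relevant chunks via cosine similarity.
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in doc_vectors]
    best = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:top_k]
    context = "\n\n".join(documents[i] for i in best)
    # 3. Feed the retrieved context to the LLM along with the query.
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```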
This technique is, of course, more complex than simple prompting, but it has its benefits:
Pros:
Allows for retrieval of latest information
Improves accuracy in some cases
Provides relevant responses
Simple and customized to your needs
Reduced hallucinations
Interpretability improvement
Cost effective - no need for a large amount of labeled data and resources
Adaptable to a wide variety of models
Cons:
Dependent on the external data source
Resource intensive
Implementation and integration can be complex
Focused on information retrieval but not domain specialized
Accuracy varies based on the domain/task
Costs associated with embedding and retrieval
This technique provides a good balance between technical proficiency requirements and customization.
Fine Tuning
Finally Ms. Taufiq explores one last technique for AI guidance: model fine tuning.
Fine tuning takes a generalist model and turns it into a specialist. It takes a language model and trains it on new, domain specific data to generate very personalized responses.
Aleena Taufiq
Fine tuning effectively modifies the weights of a language model, teaching it to become an expert in your specific domain. She mentions five types of fine tuning (a bare-bones LoRA sketch follows the list):
Supervised fine tuning: training the LLM on a dataset that contains labels.
Reinforcement learning with human feedback (RLHF): using human feedback to train the LLM by prompting it, having a human assess the response, and signaling the model to refine its output.
Parameter efficient fine tuning (PEFT): reduces the number of parameters to be updated during the fine tuning through a smaller dataset, simpler model or low-rank adapters (LoRA).
Transfer learning: LLM is fine tuned on task specific data, adapting its knowledge to the new task.
Task-specific fine tuning: adapting the parameters to match a target task and enhancing the LLM's performance related to a specific domain.
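To give a flavor of the PEFT/LoRA idea from the list above, here is a bare-bones sketch of a low-rank adapter wrapped around a frozen linear layer. In practice you would typically reach for a library such as Hugging Face's peft instead of hand-rolling this.

```python
# Bare-bones LoRA sketch: the pretrained weight W is frozen; only the small
# low-rank matrices A and B are trained, so very few parameters get updated.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)       # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus trainable low-rank update: W x + scale * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the small A and B adapter matrices are trainable
```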
Undoubtedly, this requires more expertise, but it can also bring more benefits:
Pros:
Tailors an LLM to your specific needs.
Enhances user experience with higher performance and customization.
Increases robustness.
Cons:
Expensive
Requires large amounts of quality data.
High computation needs
High level of technical skills
Potential catastrophic forgetting.
As Aleena explained, there's no one-size-fits-all solution, and each use case should be analyzed independently. It is recommended to start with prompt engineering first and then evaluate other options. Try to ask yourself: Is your ideal solution scalable? Is cost a factor? Do you have the technical expertise to implement RAG or fine-tune a model? Do you have large amounts of data? Is your data of high quality? Have you compared multiple models? Have you considered RAG and fine-tuning hybrids?
Thanks Aleena for this framework!
Ai4 Closing Remarks
The Ai4 2024 conference was not only eye-opening, but fun! We got to meet a lot of awesome people, all excited about AI. Honestly, there were so many great talks that it was hard to pick only three of them.
In general, most of the technologies found at Ai4 could be categorized into one of the following:
RAG solutions: Companies implementing their RAG infrastructure, or SaaS offering RAG services.
Automated workflows: Companies offering cloud platforms where you can use AI agents to automate business workflows.
Personal / Code assistants: Companies leveraging GenAI to enhance personal and coding productivity.
Data Infrastructure: Cloud solutions for data hosting, privacy and governance.
Hardware Infrastructure: Companies renting GPUs for AI training and inference.
Labeling / Dataset Generation: Companies offering labeling services, synthetic dataset generation and/or dataset management platforms.
We are very excited about what the future of AI holds and to be a part of it. We are looking forward to next year!