AI is once again dominating the headlines. OpenAI recently updated its GPT-4 Turbo model to include data up to December 2023, enhancing its relevance and capabilities with little publicity. The update also aims to address the model's "laziness" and improve performance. Additionally, a study by Salus Security demonstrated GPT-4's potential and limitations in auditing smart contracts, indicating that a combined approach with traditional auditing methods yields better security.
Meanwhile, Meta's introduction of V-JEPA, a vision model aiming for Advanced Machine Intelligence (AMI), represents a major leap in AI learning efficiency: it learns by focusing on how objects interact and predicts unseen events in a non-generative manner.
OpenAI Updates GPT-4 Turbo with Latest Data
OpenAI has discreetly updated the training dataset for its cutting-edge model, GPT-4 Turbo, extending its knowledge cutoff to December 2023, according to its website. The enhancement, which was not broadly publicized, positions GPT-4 Turbo as OpenAI's most up-to-date offering, distinguishing it from the freely available GPT-3.5 model, whose data extends only to January 2022. The update also aims to mitigate "laziness" in AI models, where the model fails to complete the tasks it is asked to perform.
At OpenAI's DevDay last November, the company introduced new models and developer products and announced that GPT-4 Turbo would include current-events data up to April 2023. With the latest change, the model now benefits from an additional eight months of data, even though the extension was not mentioned in the most recent update notice on Jan. 25.
The update has sparked discussion on the OpenAI developer forum, with users reporting mixed experiences regarding the model's knowledge update. While some users noted that the model still references data only up to April 2023, others shared positive outcomes, with the model acknowledging information current as of December 2023.
This update coincides closely with OpenAI's introduction of its advanced text-to-video model, Sora, which has generated a lot of excitement on social media for its ability to create realistic movie-like scenes in up to 1080p resolution. Despite its impressive capabilities, Sora is not yet available for public release.
Moreover, a recent report by The New York Times noted that OpenAI's valuation has climbed to $80 billion in its latest funding round. The company is actively engaging with global investors and governments to secure funding for the development of in-house AI chips.
GPT-4 and Smart Contracts
In other ChatGPT news, a recent study by Salus Security, a blockchain security firm, highlighted the capabilities and limitations of GPT-4 in the context of smart contract auditing. The research demonstrated that while GPT-4 shows promise in parsing code and identifying potential vulnerabilities within smart contracts, it still falls short as a standalone security auditing tool. The Salus team evaluated the AI on a dataset of 35 smart contracts containing 732 vulnerabilities, spanning seven common vulnerability types.
The findings revealed that GPT-4 is effective at confirming genuine vulnerabilities, achieving over 80% precision in tests. However, the AI struggled with a high rate of false negatives, with recall dropping as low as 11%. This low recall underscores the AI's limitations in reliably identifying security weaknesses; its highest accuracy in vulnerability detection topped out at 33%.
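To make those metrics concrete, the toy calculation below shows how high precision can coexist with very low recall. The confusion-matrix counts are invented to mirror the study's reported figures; they are not Salus's raw data.

```python
# Toy confusion-matrix numbers invented to mirror the study's reported
# figures (~80% precision, ~11% recall); they are not Salus's raw data.

def precision(tp: int, fp: int) -> float:
    """Share of flagged findings that are genuine vulnerabilities."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Share of genuine vulnerabilities that actually get flagged."""
    return tp / (tp + fn)

# Suppose the model flags 100 findings and 80 are real (precision 0.80),
# while the dataset holds 732 true vulnerabilities (recall 80/732 ~ 0.11).
tp, fp = 80, 20
fn = 732 - tp

print(f"precision: {precision(tp, fp):.2f}")  # 0.80 -> flags are usually right
print(f"recall:    {recall(tp, fn):.2f}")     # 0.11 -> most real bugs slip by
```

In practice, this trade-off means an auditor reviewing GPT-4's flags would see few false alarms, yet the vast majority of real vulnerabilities would never be flagged at all.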
Given these results, the researchers caution against relying solely on GPT-4 for smart contract audits. While the AI can offer valuable insights and assist in the auditing process, it cannot yet replace the thoroughness and expertise provided by professional auditing tools and seasoned auditors.
The study advocates for a combined approach, leveraging GPT-4's strengths in code parsing and vulnerability identification alongside traditional auditing methods to improve the effectiveness and efficiency of smart contract audits. This integrated strategy aims to harness the best of both worlds, utilizing AI's capabilities to enhance the auditing process while mitigating its current limitations through human expertise and advanced tools.
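As a rough illustration of what such a hybrid workflow might look like, the sketch below chains a conventional static analyzer with a GPT-4 review pass, leaving the final reconciliation to a human auditor. The choice of Slither as the analyzer, the model name, the prompt wording, and the contract path are all illustrative assumptions; the study does not prescribe a specific pipeline.

```python
# A minimal sketch of a combined audit workflow. Assumes the `openai` Python
# SDK and the Slither CLI are installed; all names below are illustrative.
import subprocess
from openai import OpenAI

def traditional_scan(contract_path: str) -> str:
    """Run Slither, a conventional static analyzer, and capture its report."""
    result = subprocess.run(
        ["slither", contract_path],
        capture_output=True, text=True,
    )
    return result.stdout + result.stderr  # combine both output streams

def llm_review(source_code: str) -> str:
    """Ask GPT-4 for a second-opinion review of the contract source."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-turbo-preview",
        messages=[
            {"role": "system", "content": "You are a smart contract security auditor."},
            {"role": "user", "content": f"List potential vulnerabilities:\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    path = "contracts/Token.sol"  # hypothetical contract path
    static_findings = traditional_scan(path)
    llm_findings = llm_review(open(path).read())
    # A human auditor reconciles both reports rather than trusting either alone.
    print(static_findings, llm_findings, sep="\n---\n")
```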
Meta's Leap Towards Advanced Machine Intelligence
Meanwhile, Meta has unveiled V-JEPA, a pioneering vision model that takes a big step towards Advanced Machine Intelligence (AMI), a concept championed by Meta's Chief AI Scientist Yann LeCun. The model represents a departure from traditional AI training methods, which typically require massive datasets of video examples, image encoders, textual information, or human annotations to learn even a single concept. These methods are not only resource-intensive but also inefficient.
V-JEPA, short for Video Joint Embedding Predictive Architecture, offers a far more efficient learning process by focusing on how objects interact in the physical world, much as humans learn. Humans can predict and understand events even when part of the information is missing, such as anticipating what happens when someone walks behind a screen and reappears on the other side. V-JEPA emulates this by predicting the missing or masked parts of a video; rather than generating those parts pixel by pixel, it produces an abstract description of the events in the unseen segments.
The model takes a non-generative approach, in contrast with generative models that recreate missing video segments in detail. It learns through self-supervised training on a diverse range of videos, gaining insight into the workings of the physical world without relying on labeled data. This allows V-JEPA to acquire multiple skills and concepts, broadening its understanding and reasoning capabilities.
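For readers curious about the core mechanism, below is a heavily simplified PyTorch sketch of joint-embedding prediction: visible patches are encoded, a predictor guesses the embedding of the hidden patches, and the loss is computed in representation space rather than pixel space. The shapes, module sizes, and frozen target encoder are illustrative assumptions based on Meta's published description, not Meta's actual architecture.

```python
import torch
import torch.nn as nn

DIM = 256  # embedding width (illustrative)

# Context encoder (trained), a frozen "target" twin, and a predictor.
encoder = nn.Sequential(nn.Linear(768, DIM), nn.GELU(), nn.Linear(DIM, DIM))
target_encoder = nn.Sequential(nn.Linear(768, DIM), nn.GELU(), nn.Linear(DIM, DIM))
target_encoder.load_state_dict(encoder.state_dict())  # in practice an EMA copy
predictor = nn.Sequential(nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, DIM))

def jepa_loss(patches: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """patches: (N, 768) flattened video patches; mask: (N,) bool, True = hidden."""
    context = encoder(patches[~mask]).mean(0, keepdim=True)  # summarize visible patches
    pred = predictor(context)                                # guess the hidden content
    with torch.no_grad():                                    # targets never backprop
        target = target_encoder(patches[mask]).mean(0, keepdim=True)
    # L1 distance in representation space: no pixels are ever generated.
    return (pred - target).abs().mean()

patches = torch.randn(16, 768)        # 16 fake patches from one clip
mask = torch.zeros(16, dtype=torch.bool)
mask[8:] = True                       # hide the second half of the clip
loss = jepa_loss(patches, mask)
loss.backward()                       # gradients flow to encoder and predictor only
```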
A standout feature of V-JEPA is its proficiency in "frozen evaluations." Once the model has been trained on a vast collection of unlabeled data, its encoder and predictor can be applied to new tasks without any additional training of the backbone. Unlike traditional models that require extensive retraining for new skills, V-JEPA can adapt to new tasks with minimal labeled data and slight adjustments to task-specific parameters. This efficiency makes V-JEPA an exciting development for embodied AI, promising advances in machines' contextual awareness and their ability to make sequential decisions based on their physical environment.
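A frozen evaluation might look something like the following sketch: the pretrained encoder's weights are locked, and only a small task-specific head is fit on a handful of labels. The linear probe, class count, and synthetic data here are stand-ins, not Meta's evaluation protocol.

```python
import torch
import torch.nn as nn

DIM = 256
# Stand-in for a pretrained V-JEPA encoder; its weights are locked.
encoder = nn.Sequential(nn.Linear(768, DIM), nn.GELU(), nn.Linear(DIM, DIM))
for p in encoder.parameters():
    p.requires_grad = False            # the backbone is never retrained

probe = nn.Linear(DIM, 10)             # tiny head for a 10-class task
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

# A few fake labeled clips stand in for the "minimal labeled data" regime.
features = torch.randn(32, 768)
labels = torch.randint(0, 10, (32,))

for _ in range(100):                   # only the probe's weights move
    logits = probe(encoder(features))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```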
Meta's research points to V-JEPA's potential to revolutionize how machines learn and interact with the world, moving closer to a future where AI can perform more generalized reasoning and planning with a deeper, more grounded understanding of its surroundings. This step towards more efficient and effective learning models could greatly impact fields such as robotics, autonomous vehicles, and interactive AI systems, paving the way for even more intelligent and capable machines.