
Exploring Google's Latest Language Model: PaLM2

Google IO

Google IO is an annual developer conference hosted by Google, where the company showcases its latest advancements and product releases. This year’s conference took place in early May. One of the major highlights of Google IO 2023 was the announcement of significant advancements in Google’s generative AI programmes.

It’s worth noting that much of this blog post is based on two technical reports:

  • The GPT4 technical report: https://arxiv.org/pdf/2303.08774.pdf
  • The PaLM2 technical report: https://ai.google/static/documents/palm2techreport.pdf

PaLM2 – a competitor to GPT4 announced

Google recently released a technical report on PaLM2, which is generating buzz in the machine learning community. PaLM2 is being touted as a competitor to GPT4, offering a different set of strengths and weaknesses: while it may be less “smart” overall than GPT4, it performs better in certain areas.

One notable application of this technology is that it powers Bard, the AI chat assistant Google is integrating with its search engine. With PaLM2, Bard’s inference speed is now approximately 10x faster than GPT4’s for the same prompt.

Multiple model sizes offered

PaLM2 comes in a wide range of sizes, including Gecko, Otter, Bison, and Unicorn. Gecko is the smallest of these models, and it is designed to work on phones, even for offline use. This is an impressive feat, given that language models typically require significant computational resources to function effectively.

The larger models, such as Otter, Bison, and Unicorn, are likely to be more powerful, but they may also require more resources to run. Nevertheless, the availability of a range of PaLM2 sizes means that developers can choose the model that best fits their specific use case, whether they need a smaller model for mobile devices or a more powerful one for more complex tasks. This flexibility is a significant advantage of PaLM2 over other language models.
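
To make this concrete, here’s a minimal sketch of how a developer might pick between model tiers through Google’s Vertex AI PaLM API, where (at the time of writing) the Bison and Gecko tiers surfaced under names like text-bison and textembedding-gecko. The model name, project ID, and parameters below are assumptions drawn from the public preview, not from the PaLM2 report itself.

```python
# A minimal sketch, assuming the vertexai SDK and the public-preview
# model naming (text-bison for generation). Larger tiers such as
# Unicorn were not publicly listed at the time of writing.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project ID

# Pick the tier that fits the use case: smaller models trade quality
# for latency and cost; larger models do the reverse.
model = TextGenerationModel.from_pretrained("text-bison@001")

response = model.predict(
    "Summarise the PaLM2 technical report in one sentence.",
    temperature=0.2,
    max_output_tokens=128,
)
print(response.text)
```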

Multilingual ability

PaLM2’s multilingual training data is a key differentiator from GPT4. Unlike GPT4, which was trained primarily on English-language data, PaLM2 performs well across multiple languages. In fact, PaLM2 even outperforms Google Translate on specific language pairs, such as Chinese to English and English to German. Additionally, PaLM2 has passed mastery exams in several languages, including Chinese, Japanese, French, German, and Italian. It’s worth noting, however, that PaLM2 is currently only available through Bard, which supports Japanese, English (US), and Korean.
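
Claims like “outperforms Google Translate on specific language pairs” are usually backed by automatic metrics such as BLEU. The snippet below is an illustrative sketch using the sacrebleu package with made-up sentences; it shows the shape of such an evaluation, not the harness used in the report.

```python
# Illustrative only: scoring two candidate translation systems against
# shared references with BLEU. All sentences here are invented.
import sacrebleu

# One reference stream, parallel to the hypotheses.
references = [["The weather is nice today.", "He bought a new bicycle."]]
system_a = ["The weather is nice today.", "He purchased a new bike."]
system_b = ["Today weather nice.", "He buy new bicycle."]

for name, hypotheses in [("system A", system_a), ("system B", system_b)]:
    score = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"{name}: BLEU = {score.score:.1f}")  # higher is better
```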

Pushing the boundaries with context length

Google has indicated that it is close to a significant breakthrough in the context length its AI can work with. In particular, its researchers have found that it is possible to increase a model’s context length without negatively impacting its performance on generic benchmarks.

Context length refers to the amount of text a model can attend to in a single input; once a conversation exceeds it, the oldest text, including the model’s own earlier outputs, falls out of the window and is effectively forgotten. This finding contrasts with a recent study that showed a drop-off in performance as context length increases. If Google’s approach holds up to further testing, it could represent a significant advance in the development of language models and improve their ability to understand and respond to longer, more complex inputs.
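
To see why this matters in practice, here’s a minimal sketch of the truncation chat applications perform once a conversation outgrows the model’s window. The 4-characters-per-token estimate and the token budgets are illustrative assumptions, not PaLM2 figures.

```python
# Minimal sketch: keep only the most recent conversation turns that
# still fit inside a fixed context window.
def estimate_tokens(text: str) -> int:
    """Rough estimate; real systems use the model's own tokenizer."""
    return max(1, len(text) // 4)

def trim_history(turns: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest turns until the conversation fits the budget."""
    kept: list[str] = []
    budget = max_tokens
    for turn in reversed(turns):  # walk from newest to oldest
        cost = estimate_tokens(turn)
        if cost > budget:
            break  # everything older than this no longer fits
        kept.append(turn)
        budget -= cost
    return list(reversed(kept))

conversation = ["User: Hi", "Bot: Hello!", "User: Summarise our chat so far."]
print(trim_history(conversation, max_tokens=9))  # only the newest turn fits
```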

PaLM2 excelling in directed scientific use cases

Google has also developed a language model specialised for the medical domain, called Med-PaLM2. This model achieved a remarkable score of 85.4% on the United States Medical Licensing Exam (USMLE), significantly higher than any other model to date.

Acceleration risk is a growing issue

Google also teased its GPT5 competitor, Gemini. Gemini is Google DeepMind’s next-generation foundation model, currently in training. It’s designed to be multimodal, efficient at tool and API integrations, and to enable future innovations like memory and planning.

Concerns have been raised about the potential for more powerful models to create and act on long-term plans, which could lead to power-seeking AI. Despite this, Google has not publicly identified broad AI risks in its tech report, in contrast to other organisations. Google is bringing together its Brain and DeepMind teams to accelerate progress, despite the “acceleration risk” flagged in the GPT4 tech report. Gemini is set to be trained on TPU v5 chips, whereas PaLM2 was trained on TPU v4 chips.

Overall, Google appears to be closing the gap between itself and Microsoft in what is showing worrying signs of becoming an AI arms race. Institutions and governments alike should be conscious of the accelerating pace of innovation in this space and start planning how to extract the benefits of the emerging technology while minimising, or neutralising, the risks.