The rise of API-powered NLP apps: hype cycle, or a new disruptive industry?

published on 08 March 2023

Large Language Models (LLMs) have come a long way in recent years. From fluent dialogue generation to text summarisation and article generation, language models have made it extremely easy for anyone to build an NLP-powered product. As a result, hundreds of apps have been popping up every day, predominantly relying on APIs such as OpenAI, Cohere, or Stable Diffusion.

Looking at these developments, one might wonder: what is the disruptive potential of such apps? Are they poised to deliver transformative results to all industries? Or, will their impact be limited to certain narrow use cases?

Furthermore, what challenges do developers and business owners need to be aware of in order to make a lasting impact in this space?

The rise of LLM-centred product development

Large Language Models (LLMs) have seen significant advancements in the last year, primarily due to the development of techniques that better align them to human preferences. This has resulted in impressive capacity for generating fluent text in a wide range of styles, and for different purposes, with significantly greater precision, detail, and coherence than what was previously possible.

The capacity of LLMs to follow instructions, and to learn from examples presented in their context, has made it possible to tackle virtually any NLP task with an LLM, that is, at least in principle. All that is needed is a carefully constructed prompt that is able to extract the required functionality out of the LLM. The LLM itself can be conveniently accessed through a simple API call.
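The pattern described above — a carefully constructed prompt with a few in-context examples, handed to an LLM behind an API — can be sketched in a few lines. The task, examples, and prompt layout below are purely illustrative; in a real app, the resulting string would be sent to a provider's SDK rather than just assembled:

```python
# Sketch: assembling a few-shot prompt for a hypothetical sentiment task.
# In a real app, the final prompt string would be sent to an LLM provider's API.

def build_prompt(examples, query):
    """Combine an instruction, in-context examples, and the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the LLM is expected to complete from here
    return "\n".join(lines)

examples = [
    ("Great battery life and a sharp screen.", "positive"),
    ("Stopped working after two days.", "negative"),
]
prompt = build_prompt(examples, "Fast delivery, works as advertised.")
```

The point of the sketch is how little app-side code is involved: the prompt template and the surrounding user workflow are often the entire "product" layer, while the API does the heavy lifting.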

The progress in LLMs, as well as their general availability, has led to an explosion of LLM-based apps targeting diverse use cases: blog post generation, email response drafting, summarisation of articles and meetings, fluent dialogue, and code generation.

Most of these apps focus on a narrow user workflow, essentially abstracting away certain functionalities of the underlying LLM. They typically operate by charging a premium on top of API fees.

Challenges with API-centric AI product development

When it comes to solving tangible problems that users are willing to pay for, some of the weaknesses of the API LLM approach quickly come to light. Let’s explore a few of those.

Challenge 1: No specialisation


LLMs are general-purpose models trained on the whole of the internet. For general-purpose tasks that only require creative suggestions, this is typically fine: for example, suggesting a title for a blog post.

Many real-world problems, however, require significant or full levels of automation. Often, the problem lies within a narrow domain, such as extracting information from biomedical or legal articles. The lack of specialisation of the LLM is, therefore, unlikely to fully match user expectations out-of-the-box in these narrow domains, where LLM outputs would require additional manual checking to ensure they meet quality expectations.

This problem can be alleviated to a certain extent: for example, prompt engineering, in-context learning, and model fine-tuning might help to better align the base LLM with user expectations. However, this is certainly not a quick and easy task, since it requires deep domain knowledge, carefully constructed human feedback datasets, as well as deep understanding of the underlying tech. These are issues that are unlikely to be resolved by simply calling an API, but would require a more fundamental approach to the problem.
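One concrete consequence of the specialisation gap is that outputs from a general-purpose LLM cannot simply be trusted in a narrow domain: some form of checking has to sit between the model and the user. As a minimal sketch (the domain, field names, and format rules below are hypothetical), a post-hoc validation gate might flag malformed extractions for human review:

```python
import re

# Sketch: a post-hoc check on LLM-extracted fields in a narrow domain
# (here, a hypothetical biomedical extraction task). Outputs failing the
# check are routed to manual review instead of straight to the user.

REQUIRED_FIELDS = {"compound", "dosage"}
DOSAGE_PATTERN = re.compile(r"^\d+(\.\d+)?\s?(mg|g|ml)$")

def needs_review(extracted: dict) -> bool:
    """Flag an extraction for human review if fields are missing or malformed."""
    if not REQUIRED_FIELDS.issubset(extracted):
        return True
    return not DOSAGE_PATTERN.match(extracted["dosage"])

# A well-formed extraction passes; a vague, free-text one is flagged.
ok = {"compound": "ibuprofen", "dosage": "200 mg"}
bad = {"compound": "ibuprofen", "dosage": "two tablets"}
```

Even a simple gate like this illustrates the cost: the rules themselves encode domain knowledge, which is exactly the expertise a bare API call does not provide.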

API-based apps that fail to deliver the expected level of automation risk becoming just another tool that’s fun to play around with, but that doesn’t get integrated into existing workflows, and hence never generates revenue or makes a lasting impact.

Challenge 2: No differentiation


LLM APIs have significantly sped up the process of AI prototyping. A prototype that might have taken years to develop can now be built within a week. All that is needed is a pretty interface that encapsulates the desired user workflow: the API does the rest of the heavy lifting.

The speed and flexibility the APIs provide are certainly impressive: we are seeing many creative applications of LLMs. However, on a more fundamental level, it is unclear which solutions really have something that significantly separates them from the baseline LLM, which is accessible to anyone. In other words, is there something more fundamental under the hood, or are customers essentially paying to access a polished prompt behind a pretty interface?

While there will certainly be areas where this approach works, and even leads to user traction, the question is what happens when five similar apps come out next week. Is it really possible to build a sustainable business relying purely on such APIs?

Ultimately, the winners of the LLM app race are likely to be the ones that manage to quickly capture user traction, understand where the fundamental value lies, and capitalise on that traction to build custom solutions that set them apart from the competition.

Challenge 3: Lack of ownership


The reliance on LLM APIs also creates a number of business risks. The API provider could, at any point in time, decide to make changes that could cause a significant business impact. For example, they could decide to change the price of the API, change the fee structure, the terms and conditions, or even make changes to the underlying model.

Many users might also have concerns regarding data regulation and security, since their data is handed over to third parties.

This could potentially be a recipe for disaster. What happens to your app if an API no longer works as expected, or if there is an outage? Since you don’t own the tech, you have no backup option. You could switch to a different API provider, but would everything work exactly as before? Furthermore, how can you establish trust with your users that the data is handled properly, and will be handled properly in the future?
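The "backup option" question can at least be partially mitigated in code: a thin abstraction over interchangeable providers lets an app fall back to a second vendor during an outage. The sketch below uses stub functions in place of real SDK calls, purely to illustrate the pattern:

```python
# Sketch: a thin provider-abstraction layer with fallback, so an outage
# at one LLM API does not take the whole app down. The two providers are
# stubs standing in for real vendor SDK calls.

class ProviderError(Exception):
    pass

def primary_provider(prompt: str) -> str:
    raise ProviderError("simulated outage")  # stand-in for a failed API call

def backup_provider(prompt: str) -> str:
    return f"[backup] response to: {prompt}"  # stand-in for a second vendor

def complete(prompt: str, providers) -> str:
    """Try each provider in order, falling back to the next on failure."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as err:
            last_error = err
    raise last_error

result = complete("Summarise this article.", [primary_provider, backup_provider])
```

Note what the abstraction does not solve: the backup model will not behave identically to the primary one, so outputs, quality, and prompts may all need re-validation after a switch.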

The lack of control and transparency with APIs is certainly an aspect to seriously consider, especially if your whole business case is built around them.

Conclusion and Outlook

We are witnessing a technology revolution driven by the availability of powerful AI tools. We're still at the tip of the iceberg in terms of possible applications, and there are many unknowns regarding what the next 6-12 months will look like. One thing is certain: LLMs are here to stay, and they will have a significant impact on our society over the coming years.

LLM APIs certainly have a role to play: they provide an interface to powerful LLMs that makes it extremely easy for anyone to build AI products, or to add AI features to an existing product, without having to invest in any infrastructure or development. In the short term, many of these products are likely to gather some traction. However, the challenges outlined in this article make it difficult to go beyond the prototype stage in the long term, as user expectations rise and competition increases.

The winning companies in the space are likely to be the ones who: (1) capitalise on APIs for fast prototype iteration and for data and feedback collection, while determining where the true value lies within a niche; and (2) build proprietary tech as quickly as possible: curate custom datasets, and develop and train custom models that convert the prototype into a robust solution to a real-world problem.

Thanks for reading! If you are looking for state-of-the-art expertise in Natural Language Processing, you should check out our services at The Global NLP Lab.
