This Post Is Not About Artificial Intelligence

I love that when I post something as mundane as my coffee drinking on LinkedIn I can muster upwards of 50 reactions, but when I post something critical of artificial intelligence I get one or two. I regularly see folks chant that if you aren’t talking about AI, nobody will read it. I don’t give a shit about AI because it doesn’t benefit me in any way (currently). During the last AI wave I pretended to give a shit because I was dependent on being part of the mainstream API chorus. While everyone is hyper-focused on AI because their lives (jobs) depend on it, I am exploring what is needed to make sense of the API sprawl at scale. I am not sure what the answer is, but spoiler alert, it won’t be artificial intelligence.

Before you dismiss me completely, I actually believe in the value of AI as I understand it. I think it will bring many incremental improvements to how we do APIs and the resulting applications, integrations, and AI that depend on APIs. I am not anti-artificial intelligence, I just don’t believe all the hype occurring in this moment; much of it is a Trojan horse to get access to your bits and bytes, while also shifting the labor debate. However, I do like bitching and moaning about all of this off on the sidelines, and I wanted to push myself to write a potentially useful article (at least for me) about artificial intelligence and APIs, but bury it under a title that will guarantee it does not get read. I mean, I’d rather write a useful article that doesn’t get read than a useless article that gets a shit ton of page views because it hits on all the right notes for the moment.

I would say that the argument for APIs in an AI-bent world is really a lot of what API believers, hypermedia advocates, and other smart folks have been saying for a long time. I would not say that you have to have well-designed APIs to be successful with AI, but it helps immensely. However, you do have to have the surface area of your API landscape well-defined and all the dots connected in a usable way. You also need to have heavily invested in API training and education for your teams to ensure they have the ability to rapidly deliver new APIs and iterate on existing ones. You should have a robust hypermedia and linked data strategy, utilizing JSON-LD, ALPS, and other proven API patterns. Otherwise your AI is just going to suck. I am sure you can effectively get your AI to provide common solutions, but once things get complicated (which they will), if your API dots are not connected, this is when things will break down. Crap in, crap out. If you haven’t been heavily investing in your API game, it is unlikely you will find meaningful and deep success producing effective large language models any time soon.
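To make the linked data point concrete, here is a minimal sketch of a JSON-LD API response. The vocabulary, URLs, and property names are my own illustrative assumptions, not any real provider’s schema; the point is that `@context`, `@id`, and `@type` give a machine something unambiguous to follow when connecting the dots:

```json
{
  "@context": {
    "@vocab": "https://schema.org/",
    "pay": "https://example.com/vocab/payments#"
  },
  "@type": "Invoice",
  "@id": "https://api.example.com/invoices/1234",
  "pay:status": "paid",
  "customer": {
    "@type": "Person",
    "@id": "https://api.example.com/customers/5678"
  }
}
```

An AI (or any client) encountering this payload doesn’t have to guess what `customer` means or where to find it; the identifiers are links it can dereference.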

I have no doubt that today’s vector databases and open or proprietary frameworks for delivering small or large language models are robust and maturing over time. However, if you aren’t able to effectively wire this AI architecture to rich API resources, capabilities, and experiences, your AI will stumble. This is why I am hyper-focused on API discovery and governance. If you do not have your API landscape mapped in 2024, and aren’t already in motion standardizing this landscape using governance, your AI efforts will fall short. If your teams do not have a standardized definition of what an API is and what the API lifecycle looks like, your AI investments will need to be substantial. I know we are all counting on AI coming in and being able to connect all the dots for us, but it won’t. AI will get us further than we can get on our own, but there will still be a lot of domain expertise required to connect the dots, and those who are lazy or looking for shortcuts are the ones who will get swindled in the current AI frenzy that has captured everyone’s imagination.

I have spent months immersed in APIs.json and OpenAPI artifacts for the top 100 API providers, and even with straightforward, well-known APIs like Stripe and Twilio, there are so many nuances in how OpenAPIs are incomplete, and how Spectral rules need to be tweaked to make sense of the surface area. We aren’t even getting into the specific nuance of the industries in which these APIs operate yet. I am just producing APIs.json of their operations, OpenAPIs of each API, and then tagging both to help connect dots regarding the digital resources available within. While doing this work you quickly see how much metadata work is necessary to connect the dots across a single API producer’s APIs, but also across many providers with specific tags like payments, messaging, and other digital capabilities. I think that artificial intelligence can help out a lot here on the ground floor of our API operations, but I just think there are too many humanisms codified into these digital gears and you won’t be able to fix it all with AI.
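The kind of Spectral tweaking I am talking about looks something like this hypothetical ruleset. The custom rule name and severity choices are mine for illustration, not anything Stripe or Twilio actually ships:

```yaml
# Hypothetical Spectral ruleset; rule names are illustrative.
extends: ["spectral:oas"]
rules:
  # Promote a built-in rule from warning to error so untagged
  # operations fail the build instead of slipping through.
  operation-tags: error
  # Custom rule: require a summary on every operation so that
  # tagging and search have human-readable text to work with.
  my-operation-summary:
    description: Every operation needs a summary for discovery.
    severity: warn
    given: "$.paths.*[get,put,post,delete,options,head,patch,trace]"
    then:
      field: summary
      function: truthy
```

Run against a real provider’s OpenAPI, rules like these surface exactly the gaps (missing tags, missing summaries, missing descriptions) that make the surface area hard for humans and machines alike to navigate.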

There will be folks who push back on the need for APIs in the AI discussion, arguing that you will be able to connect directly to databases and other file systems. True to a point, but much of the data you will need is locked up in databases and on file systems that aren’t always accessible except via API. Additionally, I predict that AI will slam right up against copyright laws, and if you don’t have an API layer serving up provenance, your legal bills will go through the roof. Many folks believe that copyright laws will change, but there is little evidence to support this. I think you are going to have to have the receipt for where you obtained or mined your raw digital resources, and to do this in real-time at scale you will need APIs. You will need APIs to power AI. You will need APIs to consume AI. You will need APIs to regulate AI. I’ve seen only moderate levels of adoption of all the healthy API practices like REST, hypermedia, and other design patterns over the years. Most enterprise organizations are still getting a handle on doing APIs in small and incremental ways, and have a long way to go before they get a handle on the API thing at scale. But, for some reason they believe that AI will save their souls.
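A provenance “receipt” served alongside a resource might look something like this. Every field and URL here is a hypothetical sketch of the shape such a response could take, not an existing standard:

```json
{
  "resource": "https://api.example.com/datasets/articles/42",
  "provenance": {
    "source": "https://publisher.example.com/articles/original",
    "license": "CC-BY-4.0",
    "retrieved_at": "2024-03-01T12:00:00Z",
    "method": "licensed-api-pull"
  }
}
```

The specifics will vary, but if you can’t answer “where did this come from, under what license, and when” in real time at scale, you have an API problem before you have an AI problem.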

I am barely interested in providing what is needed to make AI useful. I am not interested in doing a startup, or even evangelizing startups (sorry). I am keenly interested in perpetually assessing in an honest way what is needed to do APIs well at scale, which I think will secondarily accomplish making AI incrementally more useful. To help ensure I am not (completely) full of shit, I will be carving out some time to clean up, tag, and index API Evangelist using a vector database, trying to develop my own small language model, improving my search, and possibly even building my own assistant. I am not looking to accomplish any specific AI outcome except for learning more about how this all works, and how APIs are involved. I have 14 years of content and data, as well as a wealth of APIs.json and OpenAPI artifacts I can ingest. I am purposefully disavowing SEO and other traffic-driving efforts I’ve historically invested in, and will be very cautious when I publish anything to API Evangelist, especially specific datasets I have been curating. So it makes sense for me to develop my own AI, right? We’ll see. As with the rest of this API Evangelist journey, it is all about learning more about the inner workings of the rising API operations around us, but also the artificial intelligence models we are building, while also tuning into the business and politics of the AI shit show.
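The core mechanic behind the vector database experiment is simple enough to sketch. This toy uses a naive bag-of-words count in place of a real embedding model, and a plain dictionary in place of an actual vector store; everything here is illustrative, not how I (or anyone) would do it in production:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Naive 'embedding': token counts stand in for a real language model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(index: dict, query: str) -> str:
    """Return the id of the indexed document most similar to the query."""
    q = embed(query)
    return max(index, key=lambda doc_id: cosine(index[doc_id], q))

# Index a few hypothetical posts the way a vector store would, keyed by id.
posts = {
    "apis-json": "APIs.json index describing API operations and discovery",
    "openapi-tags": "tagging OpenAPI definitions to connect digital resources",
    "governance": "API governance rules and lifecycle standardization",
}
index = {doc_id: embed(text) for doc_id, text in posts.items()}

print(search(index, "how do I tag an OpenAPI definition"))  # → openapi-tags
```

Swap `embed` for a real model and `index` for a vector database and the shape of the system stays the same, which is exactly the learning I am after.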