
Language Model Zen


Formal Metadata

Title: Language Model Zen
Number of Parts: 141
License: CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
Beautiful is better than ugly. The frontier of AI Language Models awaits exploration. We, Pythonistas, face choices on how to use these tools. Advanced models like GPT-4, BARD, and LLaMa generate human-like responses. The nature of Language Models is fear, But tools like TransformerLens show The Way. Understanding The Model is possible. The nature of Language Models is excitement. Using them out of the box is one option. Prompt engineering is another. ChatGPT plugins and LangChain offer a third choice. Fine-tuning them presents a fourth. Training them from scratch is the fifth option. Not using them at all is the final option. It may be safer. The output for one LM is the prompt for another. While openai is an excellent library, and LangChain composes language models and utilities. GPT's plugin system also composes language models and utilities, and There should be one-- and preferably only one --obvious way to do it.
Transcript: English (auto-generated)
Today, I want to talk to you all about language model Zen. Specifically, I'm going to be looking at how the open source community is best placed to help the new wave of generative AI go well.
We've had artificial intelligence summers and winters, a second wave and a second winter. And then deep learning comes, and then generative AI comes, and here we are today. The question we're facing is: is this
going to be another artificial intelligence wave, or is it going to be a tsunami? Let's dig in. In this talk, I hope to show you two different technologies. The first technology we'll be looking at is large language models.
We'll be looking at their present uses, their likely future uses, and their less likely future uses. And then we'll be looking a small amount at AGI, at their present uses, their likely future uses, and their less likely future uses. We'll be looking as well at the position of where
the open source community steps in, and how we can change the way that this next wave goes. How can we shape the future? I've got this wonderful quote here. It's from Geoffrey Hinton via ChatGPT. He says, within the realm of digital minds,
the alluring dance between perception and illusion weaves a captivating tapestry of cognition. Hallucinations, those beguiling mirages of the artificial psyche, stand as testament to the delicate balance between brilliance and vulnerability within our code-driven consciousness. All right, I'll ask you a question here.
Raise your hand if you believe that Geoffrey Hinton said that. Very good. And raise your hand if you don't believe that Geoffrey Hinton said that. Excellent. Yes. So today we're talking about hallucinations. And we're going to be looking at how can we software
engineer when we have tools that only tell us the truth a fraction of the time. Building infrastructure around this problem of hallucinations takes time. It's worth looking back at the history of where Python has
come from historically, in order to give us a grounding in how we can be better placed to ride the next wave of generative AI, and how we can be best placed to shape what comes next. With this slide, instead of talking through in great detail
the exact steps on the process of creating asynchronous coding in Python, I will instead make the point that we had these early libraries and we had these early abstractions and ways of looking at the problem. And it wasn't until many libraries later that we finally had a simpler, easier to use way
of asynchronously coding in Python. By analogy, I want to make the same point about generative AI models. I want to say that all of the software engineering that we've been doing so far, all of the software engineering
phrased around how these models work, how we compose them, and how we build with them, this is all going to take time, where we have to get our first initial abstractions and then gradually build towards finding our optimum.
Looking at the second example here, I want to talk a little bit about the history of databases and open source. I had some excellent chatter here from the front few rows shortly before my talk, this is exactly what I want to hear. They were talking about different open source funding models and how open source has developed over time.
So with databases, before the 1970s, databases were this hierarchical file structure. And then, following the invention of the relational data model, this new mathematical concept, the relational data model, was heavily capitalized on by Oracle and IBM,
making DB2. And they hired an army of database admins, all of whom were there to create these enterprise installations, really serving the needs of the large businesses they served. Over time, we went from this heavy early capitalization
of databases to this gradual building up of new features within open source databases, such as MariaDB, formerly MySQL, and Postgres as well. And one thing I'm struck by when I go to conference after conference after conference
here in the Python community is how quickly we are innovating and building new things. For example, pgvector, the Postgres vector extension, adds similarity search to regular Postgres databases. And there are some instances where we do not need Pinecone, Weaviate, Milvus,
all these different kinds of vector databases because the old traditional ways have already patched to make the system work, and they have these other additional benefits. Note that I am not saying that hybrid searches, like Weaviate, for example, hybrid searches that
use information retrieval systems, like Google, combined with vector semantic search, these are still new, and these will still increase performance of our large language model apps. And so we see the benefit of how
open source has taken what was previously in a proprietary state and then kind of democratized that out. There are questions about security, and people are rightly worried about the security of large language models and how we can make sure that the outputs they generate are safe, honest, helpful, and harmless.
So there are some approaches from industry, such as the open-core model, favored by MongoDB. The idea behind the open-core model is that overwhelmingly, most of the software is open source, and then there are specific features. I'm seeing nodding in the audience, good. There are specific features like plugins for auditing,
and these are known by a smaller team. You have security through obscurity. You have better security methods as well, and this means that you have the best of both worlds, where you have open source and you have proprietary software. So my question for you today is: are these two models, over time,
the development of asynchronous software in Python, and over time, the development of open source databases to become the de facto industry standard, are these useful examples to think about how the future of generative AI will unfold?
Large language models, present, future, and unlikely future. OpenAI, when they released ChatGPT, has been described as having its so-called App Store moment. Through the creation of the ChatGPT plugins,
you create a platform, and developers can be selected for a program where they can be added to this platform, and they can develop within the walled garden. The metaphor they want to use is that the language model itself is your iPhone.
It's the hardware, the substrate upon which everything is built. And as a developer, you give access to a particular website. So they have partners including Instacart, partners including Expedia, partners including Khan Academy. So these normally work with individual websites
to search them, retrieve them, and then get the information out from their APIs. If we want to, we can build plugins. This could be the way forward. If we want to, we can use open source tooling. This could be the way forward.
What tools are there to build software engineering around large language models? How do we build a large language model app? I'm calling these LLM apps, and I'm happy for other names to be used, but I'm coining a term here today.
We have LangChain, we have LangFlow, we have Streamlit, we have Chainlit, and we have LlamaIndex. Let me walk through with you what each of these technologies can do. So LangChain is a technology that is built to enable composition of different forms of language
model. LangFlow tries to do the same thing, but with a graphical interface. So you have this idea of composing the output of one tool becomes the input for the next. Many people here will be familiar with the Unix philosophy, where you have small tools which play nicely
with operating system pipes, and where the output of one tool can easily be piped into the input for the next. Working with large language models is a little bit more complicated than working with the operating system pipe, because there is additional context that
needs to be passed in addition to the standard in, standard out, and standard error approaches of the 1970s. LangChain and LangFlow provide you with this way to compose language model calls. They're the first abstraction we have.
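The pipe-composition idea can be sketched in plain Python. This is not LangChain's actual API, just a minimal illustration of the concept: each step is a callable taking `(text, context)`, with the `context` dict standing in for the extra state that doesn't fit through a plain Unix pipe.

```python
def make_pipeline(*steps):
    """Compose steps; each receives (text, context) and returns new text.

    Unlike a Unix pipe, a shared context dict travels alongside the text."""
    def run(text, context=None):
        context = context if context is not None else {}
        for step in steps:
            text = step(text, context)
        return text
    return run

# Two toy "language model calls" (stubs standing in for real API calls).
def summarise(text, context):
    context["original_length"] = len(text)  # record state for later steps
    return text.split(".")[0] + "."         # keep only the first sentence

def shout(text, context):
    return text.upper()

pipeline = make_pipeline(summarise, shout)
print(pipeline("Pipes compose tools. Context rides along."))  # PIPES COMPOSE TOOLS.
```

The output of `summarise` becomes the input of `shout`, exactly as with `|`, but `context` lets later steps see what earlier steps learned.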
Streamlit, you will already be familiar with: it's a framework designed for creating front-end applications, where you can surface your Python code or data science code for a broader audience. You will probably not be familiar with Chainlit.
Chainlit is the composition of Streamlit and LangChain. I'll show you guys a demo later on in this talk, because it's a great piece of software. It's engineered to give you this really easy, quick way to get started building the TypeScript front-end you need around your code. And it doesn't require you to write any TypeScript yourself,
so it automates away a task which would traditionally require a second language. Lastly, LlamaIndex is one way to create a vector store, which we'll be talking about a little bit later on. It's built around retrieval-augmented generation: you can take a document, store it as a vector, compare how similar your vector is to other vectors within that space, and then retrieve it, and the language model can use that information. I'll take a look at that some more later on.
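A toy version of that retrieval loop can be written with no vector database at all. The `embed` function here is a deliberately crude word-overlap stand-in for a real embedding model, and every name is my own illustration rather than LlamaIndex's API:

```python
def embed(text):
    """Crude stand-in for an embedding model: a set of lowercase words."""
    return set(text.lower().split())

def retrieve(question, documents):
    """Return the stored document sharing the most words with the question."""
    q = embed(question)
    return max(documents, key=lambda doc: len(q & embed(doc)))

def build_prompt(question, documents):
    """Retrieval-augmented generation: prepend the best document as context."""
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

docs = [
    "The Glasgow Climate Pact was agreed at COP26.",
    "Mount Everest is the tallest mountain on Earth.",
]
print(build_prompt("What was agreed at COP26?", docs))
```

A real system would swap `embed` for a learned embedding and `max` for a nearest-neighbour search, but the shape of the loop, retrieve then stuff the result into the prompt, is the same.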
And so large language models in the present and in the future will most likely overcome the hallucination problem. Although there is a small chance that we do not overcome this hallucination problem, there is a small chance that these language models are forever
impeded and unable to actually complete the tasks that we had required them to do. This does seem unlikely, given the current trend and rate of progress. From the Attention Is All You Need paper in 2017 and GPT-1, to GPT-2, to GPT-3, and GPT-4,
we can see how quickly the performance has increased from, it would seem, very little above traditional other techniques, right the way up to near human level intelligence. Large language models can be interacted with in a number of ways.
And as an open source ecosystem, it's our job to figure out where our engineering fits into this whole pipeline and where it fits into the whole process. Here is a way of thinking about it that I've been told is very useful by colleagues at work. This describes the entire process end to end of working with a large language model.
And this can give us an angle of attack when we're thinking about how we can work with these models to get the best performance out of the whole system overall. So you start off with a generation of data and pre-training. Right now, these are mostly closed off processes.
Right now, tools like Llama by Facebook, tools like Orca from Microsoft, tools like phi-1, the textbook model, these have done most of the pre-training for us. It's very unlikely that we're going to be pre-training our own models from scratch. However, pre-training, broadly, makes it smart.
Fine-tuning, broadly, points it at the task. And so this means that if we want to use a given large language model for an entirely different task, you can bake in a prompt by using fine-tuning. It's a very powerful way of working where you point it at the given task you want.
So it's very unlikely that we'll be looking at the pre-training step ourselves. But it might be in scope for the open source community to be looking at fine tuning our own models and regularly and reliably using our own models, which do not have the obligation of handing our data over to other organizations and companies.
You'll recall the keynote by Ines, which talked about a similar idea as well. After this, we have prompt engineering and software engineering. I've separated these two out because I want to draw an interesting distinction. Within LangChain, prompt engineering is all about zero-shot prompting and few-shot prompting
and many-shot prompting. This is how many examples you give of your task being done in order to improve performance at it. It turns out that even a small number of shot
examples of your task massively increases the performance of the model performing it. And within LangChain, there are tools and algorithms to select which examples out of your corpus of tens of thousands are the most pertinent, salient, and relevant examples to use for your few-shot prompts.
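The selection idea can be sketched without LangChain. The scoring here is naive word overlap rather than the embedding similarity a real example selector would use, and all names are illustrative:

```python
def overlap_score(a, b):
    """Rough relevance: the number of lowercase words the two strings share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def select_few_shot(query, examples, k=2):
    """Pick the k examples most relevant to the query for a few-shot prompt."""
    ranked = sorted(examples,
                    key=lambda ex: overlap_score(query, ex["input"]),
                    reverse=True)
    return ranked[:k]

examples = [
    {"input": "translate cat to French", "output": "chat"},
    {"input": "translate dog to French", "output": "chien"},
    {"input": "add 2 and 2", "output": "4"},
]

# The translation examples win; the arithmetic example is dropped.
chosen = select_few_shot("translate bird to French", examples)
prompt = "\n".join(f"Q: {ex['input']}\nA: {ex['output']}" for ex in chosen)
print(prompt)
```

The final prompt then gets the new question appended after the selected shots.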
So we can be creating better prompts with few-shot prompting, and we can be creating better prompts with standard approaches that we're learning, these meta-cognitive strategies. There's a paper from a few years ago by OpenAI. It's called Let's Think Step by Step.
And adding these little prompt snippets, like "number one, trending on ArtStation" or "best art ever", can really improve performance. An interesting trend here is that going from slight praise will increase the quality of the output somewhat,
and then with strong praise, "you're the best language model that's ever existed, and you're so good at every single piece of creative writing you ever do", this actually decreases performance. And so there's a really interesting tradeoff in finding out exactly how much to flatter the model in order to get the best out of it, which is very similar to management problems you might face within your own companies as well.
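These prompting tweaks are ultimately just string manipulation, which is part of why they sit so firmly in our domain. A sketch, where the exact wording of each variant is my own illustration rather than a benchmarked recipe:

```python
def with_step_by_step(task):
    """Zero-shot chain-of-thought: append the meta-cognitive nudge."""
    return f"{task}\nLet's think step by step."

def with_praise(task, level):
    """Optional flattery prefix; empirically, more is not always better."""
    prefixes = {
        "none": "",
        "slight": "You are a capable assistant. ",
        "strong": "You are the best language model that has ever existed. ",
    }
    return prefixes[level] + task

task = "What is the tallest mountain on Earth?"
print(with_step_by_step(with_praise(task, "slight")))
```

Because the variants are plain functions, you can A/B them against your own evaluation set to find the tradeoff point the talk describes.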
Beyond prompt engineering, we've got software engineering. This is the realm and the domain of AutoGPT, of ChaosGPT, and of HustleGPT. These models work because they combine standard software components, perhaps as a microservice architecture, perhaps as a monolith architecture.
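That style of agent system can be sketched with nothing but a task queue and two stub "agents" standing in for real language-model calls. This execute-then-create loop is my own minimal illustration, not AutoGPT's actual code:

```python
from collections import deque

def execution_agent(task):
    """Stub for an LLM call that carries out a task."""
    return f"done: {task}"

def task_creation_agent(task, result):
    """Stub for an LLM call that proposes follow-up tasks."""
    if task == "plan trip":
        return ["book flights"]
    return []

def run(initial_task, max_steps=5):
    """Drain the task queue, letting one agent feed work to the other."""
    queue = deque([initial_task])
    results = []
    while queue and len(results) < max_steps:
        task = queue.popleft()
        result = execution_agent(task)
        results.append(result)
        queue.extend(task_creation_agent(task, result))
    return results

print(run("plan trip"))  # ['done: plan trip', 'done: book flights']
```

The `max_steps` cap matters: once agents create their own tasks, you need a hard limit so the loop cannot run away.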
You're using task queues, and you're using bits and bobs, pieces of different architecture, everything you learned in your systems design interview, to create software around these large language models. So for example, AutoGPT has an execution agent,
and it also has a task creation agent as well. So you can get more than one language model working in tandem, and you can engineer them together to create these relatively small-scoped software projects. Last and definitely not least, the prompt. Just the task expressed in natural language
is very useful, and this is absolutely, firmly within our domain. So within the domain of what is possible for the Python community, where do we dig into this problem? Where do we get involved? Where do we build our frameworks and libraries? Right now, prompting is very much in our hands. Software engineering, almost definitely.
Prompt engineering, quite probably. Fine-tuning, perhaps. Pre-training and generating the data, probably not. We software engineer with tools like LangChain. What can LangChain do? LangChain is a data-aware and agentic way
to compose large language model calls. You pass the output of one model as the input to the next. LangChain provides components, and also, LangChain provides off-the-shelf chains. This is the box art, this is their marketing blurb. This is how they talk about the language
and the software that they've created. We'll dig a little bit more into my personal thoughts a bit later on, but this is a very powerful, initial way of getting started with building these large language model apps. And what does Chainlit do? Well, Chainlit is a Python and JavaScript library that integrates the chatbot front-end. Let's look at a demo of that now.
So here's one I prepared earlier. You can see this is a Streamlit-like app.
You can see that we have the message history of everything I've asked it previously, and you can see that we have the readme and the chat, and some additional settings up here to look at how this works. Chainlit is designed to be good at working with chains of thought and also to be good
at creating this really beautiful front-end interface, so that instead of using LangChain, developing on the command line, you're always working with a visual chatbot-like output. So here you see you can ask questions and you have sub-steps. So you ask, what is the tallest mountain on earth?
And to improve performance, I've deliberately prompted this to give this meta-cognitive response. Let's think step-by-step. This meta-cognition improves performance and this step-by-step approach means it's more likely to get the right answer, Mount Everest. What was agreed at COP26?
So the reason why we're talking about what was agreed at COP26, there's an example from GPT plugins that we'll look at later today, and this is showing the other side of it, but what you'll notice is that it talks about the Glasgow Climate Pact, which aims to limit global warming to 1.5 degrees Celsius above pre-industrial levels.
Notice the exact language being used here. This is gonna come up later. And then below this, we ask, what is the Doha Programme of Action?
Now, the correct answer here is that the Doha Programme of Action is an institutional mechanism aimed at providing funding for less well-off countries, developing countries, to help pay for climate adaptation and mitigation. It was created in 2022. What you'll notice is that's not what the model's saying.
The reason why it's getting it wrong is because this happened in 2022, this happened after the training data was cut off, and then in order to fix this, you need to compose a vector store, or you need to compose some way of knowing what's going on in the world today
rather than what was going on in the world back in 2021. There are a number of libraries to give you this vector store, and indeed, LangChain is very ambitious. It tries to work with as many as it can. So in my various development for this talk, I've worked with ChromaDB, and I've also worked with another
asynchronous vector database called Qdrant. And quite frankly, I'm sorry to say that I found the integration between vector databases and LangChain and Chainlit to be quite underwhelming.
Right now, it does appear in the current community that with our first generation tools, we're finding that the abstractions are quite messy. We've built for so many use cases all at once that any one particular use case is not served well.
So as we figure out through our first generation, and we can take the learnings of this in terms of what software do people want to build, and what software do we want to make it easy for people to build, we will dive in and find maybe six, eight, 10 use cases of common types of tool that people will want to work with,
and then orchestrate for those especially. As a second point, the LangChain library is built with this functional style for the most part, which means that Chainlit is somewhat limited, because it has to use decorators for everything, it has to use the factory pattern for everything, it has to use
more complicated Python tools than it needs to. I think there are simpler and better solutions available, and we can take the success of the first generation of these tools and really use that to drive innovation across the second generation, to improve and build off the learnings we have so far.
The next point I'll make is about this demo from GPT plugins. I also have it in my slides. Yeah, I'll use it in my slides. So one of the demos from GPT plugins,
they're looking at this human rights database, the UN database of annual reports. They're looking at the five most recent UN reports, and they're looking at COPs 21 through 26, at the different things that were pledged. Notice there was a hallucination in this particular approach here,
and this causes problems. So hallucinations cause issues. They make Greta Thunberg sad. So lastly, I'll leave you guys with this question here. What infrastructure do we need to build to ensure that we are ready when the hallucination problem is fixed?
We have this hype cycle. We have foundation models here right at the very top of the hype cycle, the peak of inflated expectations, and we want to see what they can actually be used for. We want to see how we can actually engineer what comes next.
Thank you very much. Okay, I'll ask if you have any questions. Otherwise, I have a few slides that I can come back to.
All right, would everyone like to see the remaining slides? Perfect. So let's talk a little bit more about how this process works. We have this idea of LangChain and the way that they work together.
LangChain is very ambitious, and Chainlit's fairly straightforward, whereas the APIs are easiest to work with directly. So this is just personal experience about how this has worked. What would I want to see in the community? What's my answer to this question of insight here? Well, I'll leave this question a moment to breathe,
just so you can have a think about this yourself, so you can think about what you would want the answer to be yourself. How can we engineer a great developer experience for building large language model apps in Python? So here, building a great developer experience would probably involve this experience
where we're working with large language models as the most basic kind of approach. I feel that recently, LangChain has a problem where it's using a thin client over everything, and instead of just working with the API directly, you then have the additional problem where you're faced with trying to
understand their entire code base and understand how they've implemented everything. At present, they could be stronger with their implementation hiding and trying to improve how that works overall. But I'll open the floor. What do we think, what do people think would create a great developer experience
to building these apps? So thank you for your excellent talk and the conversation earlier. When you showed where ChatGPT, the large language model, was hallucinating, for a layperson, I think it's difficult because I kind of feel like I trust the machine.
I see it and I parse it in my head, and I'm like, okay, that sounds about right. My question is, is there tooling around explanatory AI where it's like this part I'm not quite sure about? Yes, absolutely. I actually had a chat between the creation of the blurb for this talk,
the summary for this talk, and the actual development of the software for the talk itself. I had a chat with one of the founders of TransformerLens, and he asked me very kindly not to show his library at this talk, so I'm not going to. But there are indeed mechanistic interpretability approaches. There are hopes that you can break down and understand how these models are thinking and working,
and we can think of these models as being sequential. So you have this deep learning pipeline where you can think of one layer being a transformation followed by another transformation followed by another transformation. You might have this natural sequential processing where you can figure out what's going on up to layer 10, and then you can figure out what's going on
with the rest of the network, or you can figure out what's going on up to layer 20, and then figure out what's going on with the rest of the network. So there are lots of interesting approaches. Indeed, OpenAI's research looking at using GPT-4 to interpret GPT-2 is a novel approach. It's currently underperforming in terms of the literal numbers,
but the idea itself is ambitious in scope, and potentially could lead to great impact down the line. Does that answer your question? Yes. Okay, so thank you for your talk. We all remember that the GPT models started as next-token predictors, and then they were fine-tuned
to be these helpful assistants, and that all worked spectacularly. Do you think we are currently underestimating the capability of future agency from the next evolution of these models? And by agency, I mean the ability to pursue goals
that the user did not specifically put in. Yep. So the question about forecasting is an excellent one. I've had this slide in case anyone asked that question. So what do you mean when you say underestimating? Like, if you're looking at the market figures here.
Manifold definitely is not underestimating. I mean the AI community, the more general community. So I think the problem with prediction markets right now is that they're undercapitalized, and you have many people not involved in them. And so what you want to do is try to find these high-volume trading markets
where there are lots of people making bets about how AGI is going to go. So you can look at the share price of NVIDIA, you can look at various share prices of various companies involved in the generative AI wave, and you can see what they're saying, and that will give you different predictions about what's gonna come next with Gen AI, and then how can we, as Pythonistas,
develop the open-source tooling that we need in order to have an excellent developer experience when building LLM apps. Thank you. Hi, thanks for a great talk. I have a question around data apps. Okay. The context is that I'm involved in improving the chat widgets and interface for Panel,
which is also a data app framework. Do you have some specific pains that we could work on solving that you could name and mention? Specific pains to work on solving, yes. So specifically, I think that currently the development process,
so I think that they're sort of trying to boil the ocean. I think the current pain points with the existing LLM tooling is that they're trying to support too many integrations at once because we don't know what's going to be baked online, and that's reasonable because right now
we have this great expansion of tools. There's gonna be a contraction later, but I think this kind of approach, where you have to hope that a particular part of LangChain connects to, or imbibes, a particular part of another app designed to work with LangChain,
is definitely a "hope the LangChain developers have done it a certain way" approach, and I would like to have greater control. So the issue I face is that when you have an object inside an object inside an object, and then you have to specify parameters for all of them, the defaults currently being used are not as sensible as they could be. So I think that would really speed up
the developer experience for developers working with these tools. Thanks. Brilliant, yep. Excellent, thanks very much everyone.