06-04, 15:00–15:40 (Europe/London), Beaumont
ChatGPT and the GPT models from OpenAI have revolutionized not only how we write text, but how we process information about the world. We'll discuss the capabilities and limitations of large language models such as ChatGPT, possible applications, tooling, data security, wider societal implications, and ethics. Some applications go as far as automating data analysis, which raises questions about the future of data science.
Let's talk about large language models, what we can do with them, and the future of machine learning and data science. I am writing a book about LangChain and large language models, and I'm looking forward to a lively discussion with lots of ideas and big questions.
No previous knowledge expected.
Ben is a machine learning engineer and developer. With a PhD in computer science from KTH, he simulated brain connectivity on high-performance computers (up to 64k cores), authored scientific papers on feature selection and clustering, and designed and implemented a decision engine processing hundreds of thousands of financial transactions per day. In his previous work, he trained large language models (deep learning) on millions of text documents for information extraction.