Most of our white collar jobs are about knowledge sharing and synchronization between people.
And surprisingly, this is an area in which I see very, very little progress.
The best we have are tools like Confluence or Jira, which are actually quite bad in my opinion.
The bad part is how knowledge is shared: at the moment it is just formatted text with a questionable search.
LLMs, I believe, can help synthesize what knowledge is there and what is missing.
Moreover, it would be possible to ask what is missing or what could be improved. And it would be possible to continuously test the knowledge base, asking the model questions about each topic and checking the answers.
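To make that concrete, here is a minimal sketch of the test loop I have in mind (not the prototype itself; `ask_llm` is just a stand-in for whichever chat-completion client you use):

```python
from typing import Callable

def audit_topic(title: str, chunks: list[str],
                ask_llm: Callable[[str], str]) -> list[str]:
    """Generate questions about a topic, answer them only from the stored
    chunks, and return the questions the knowledge base could not answer."""
    body = "\n\n".join(chunks)
    questions = [
        q.strip() for q in ask_llm(
            f"Here is everything we wrote down about '{title}':\n\n{body}\n\n"
            "List five questions a newcomer would ask about this topic, one per line."
        ).splitlines() if q.strip()
    ]
    gaps = []
    for q in questions:
        answer = ask_llm(f"Answer using ONLY this text:\n\n{body}\n\nQuestion: {q}")
        verdict = ask_llm(
            f"Question: {q}\nAnswer: {answer}\n"
            "Is the answer complete and grounded in the text? Reply YES or NO."
        )
        if not verdict.strip().upper().startswith("YES"):
            gaps.append(q)  # a gap worth documenting or improving
    return gaps
```

Run something like this periodically over every topic and the gaps surface on their own, without anyone having to review the wiki by hand.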
I am working on a prototype and it is looking great. If anyone is interested, please let me know.
Knowledge is power, and people don't always want to share it. Maybe it's more reflective of my company culture, but I've seen knowledge effectively hoarded and used strategically as a weapon at times.
It is visible everywhere. Some people hoard knowledge so that they stay important in the company. Some people hoard knowledge so that they can get more money from bug bounties. It is almost always about personal gain.
Of course there is no upside to spending time updating documentation unless it is actually part of your job description or there is a legal requirement for the company.
If you put knowledge in a wiki, no one will read it and they will keep asking about stuff anyway.
Then if you do put it there and keep it up to date, you open yourself up to attacks from unhappy coworkers, who might use it as a weapon, nagging that you did not do a good job or finding gaps they can complain about.
> LLMs, I believe, can help synthesize what knowledge is there and what is missing.
How could the LLM help?
Given that it is missing the critical context and knowledge described in the article, wouldn’t it be (at best) on par with a new developer making guesses about a codebase?
As engineers we often aim for perfection, but oftentimes it is not really needed. And this is such a case.
Knowledge is organised into topics, and each topic has a title and a goal. Topics are made of markdown chunks.
I see the model being able to generate insightful questions about what is missing from the chunks, as well as synthesise good answers for specific queries.
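Roughly, the shape of it is something like this (an illustrative sketch, not the actual prototype code):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Topic:
    title: str          # e.g. "Release process for the mobile app" (made-up example)
    goal: str           # what a reader should be able to do after reading the topic
    chunks: list[str]   # markdown chunks holding the actual content

def answer_query(topic: Topic, query: str, ask_llm: Callable[[str], str]) -> str:
    """Synthesise an answer to a specific query from the topic's chunks,
    flagging explicitly when the stored knowledge is not enough."""
    body = "\n\n".join(topic.chunks)
    return ask_llm(
        f"Topic: {topic.title}\nGoal: {topic.goal}\n\nContent:\n{body}\n\n"
        f"Using only the content above, answer: {query}\n"
        "If the content is not enough, say exactly what is missing."
    )
```

The same structure lets the model ask the gap questions: feed it the title, the goal, and the chunks, and ask which questions a reader pursuing that goal still could not answer.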
I think companies have a lot of data in systems like Confluence, Jira, and their chat solution that is hard to find, and people in the company don't even know it might be there to search for.
An LLM trained on these sources might be very powerful at helping people avoid solving the same problem many times over.
The problem isn't the interface, it's the access: having everything in one place vs. fragmented across different systems and different departments.
I built a chatbot under the same assumption you have, for a large ad agency in 2017: an "analyst assistant" for pointing to work that had already been done and offering to run scripts written years ago so you didn't have to write them from scratch.
Through user testing, the chat interface was essentially reduced to drop-down menus of various categories of documentation, but actually it was the hype of having a chatbot that justified the funding to pull all the resources together into one database with the proper access controls.
I would expect after you went through the trouble of training an LLM on all that data, people using the system would just use the search function on the database itself instead of chatting with it, but be grateful management finally lifted all the information silo-ing.
Some of these companies aren't exactly eager to make it cheap to access the data you have entered into their systems. It's like they own your data in a sense and want to make it harder to leave.
I love your point about the chatbot being the catalyst for doing something obvious. I curate a page for my team with all the common links to important documentation and services and find myself nevertheless posting that link over and over again to the same people because nobody can be bothered to bookmark the blasted thing. Sometimes I feel it's pointless making any effort to improve but I think you have a clever solution.
The other aspect of it, IMO is that searching for the obvious terms doesn't always return the critical information. That might be my company's penchant for frequently changing the term it likes to use for something - as Architects decide on "better terminology". I imagine an LLM somehow helping to get past this need for absolute precision in search terms - but perhaps that's just wishful thinking.
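Concretely, what I'm imagining is something like embedding-based retrieval instead of keyword matching, so a query in the old terminology still finds the page written in the new one (just a sketch; `embed` stands in for whichever embedding model you would use):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a call to an embedding model (OpenAI, sentence-transformers, ...)."""
    raise NotImplementedError

def semantic_search(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query instead of exact terms."""
    q = embed(query)
    scored = []
    for doc in docs:
        d = embed(doc)
        score = float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]
```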