When I search this topic online, I always find either wrong information or advertising lies. So what is something that LLMs can actually do very well, as in being genuinely useful and not just outputting nonsensical word salad that sounds coherent?
Results
So basically from what I’ve read, most people use it for natural language processing problems.
Examples: turn this infodump into a bullet-point list, turn this bullet-point list into coherent text, help me rephrase this text, word association, etc.
Other people use it for simple questions that it can answer with a database of verified sources.
Also, a few people use it as a struggle duck, basically to help alleviate writer’s block.
Thanks guys.
Overcoming writer’s block, or whatever you want to call it
Like writing an obit or thank-you message that doesn’t sound stupid. I just need a sentence down to work from, even if it doesn’t make it into the final draft.
Or I needed to come up with activities to teach 4th graders about aerodynamics for a STEM outreach thing. None of the LLM’s output was usable as it was spit out, but it was enough for me to kickstart real ideas.
Same. It gets me started on things, even if I use very little or even none of its actual output.
This is a great use. I use it for a similar purpose; it’s great for brainstorming ideas. Even if its ideas are bullshit because it made them up, they can spark an idea in me that’s not.
Yes, it’s like the rubberducking technique, with a rubber duck that actually responds.
Sometimes even just trying to articulate a question is a good first step toward finding the solution. An LLM can help with this process.
That’s about where I land. I’ve used it the other way, too, to help tighten up a good short story I’d written where my tone and tense were all over the place.
I’ve used LLMs to write automated tests for my code, too. They’re not hard to write, just super tedious.
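For illustration, a minimal sketch of the kind of tedious-but-simple test I mean; the slugify() helper and its cases are made up for this example, not something an LLM actually produced for me:

```python
# Hypothetical example: slugify() and its test cases are invented here to show
# the sort of repetitive test an LLM can churn out quickly.
import re

import pytest


def slugify(title: str) -> str:
    """Turn a title into a lowercase, dash-separated slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


@pytest.mark.parametrize(
    "title, expected",
    [
        ("Hello World", "hello-world"),
        ("  Leading and trailing  ", "leading-and-trailing"),
        ("Symbols & punctuation!", "symbols-punctuation"),
        ("already-a-slug", "already-a-slug"),
    ],
)
def test_slugify(title, expected):
    assert slugify(title) == expected
```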
As a developer, I use LLMs as a sort of search engine: I ask things like how to use a certain function or how to fix a build error. I try to avoid asking for code, because the generated code often doesn’t work or uses made-up or deprecated functions.
As a teacher, I use it to generate data for exercises; it’s especially useful for populating databases and generating text files in a certain format that need to be parsed. I’ve tried asking for ideas for new exercises, but they always suck.
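As a rough sketch of what I mean by “a certain format that needs to be parsed” (the pipe-separated format and the sample records are invented for illustration): the LLM generates a pile of lines like these on request, and the exercise is to parse them back into structured data.

```python
# Hypothetical exercise data: the "name|age|city" format and the records are
# invented; an LLM can generate hundreds of lines like this on request.
SAMPLE = """\
Alice|34|Lisbon
Bob|27|Toronto
Chen|41|Taipei
"""


def parse_people(text: str) -> list[dict]:
    """Parse pipe-separated records into a list of dictionaries."""
    people = []
    for line in text.strip().splitlines():
        name, age, city = line.split("|")
        people.append({"name": name, "age": int(age), "city": city})
    return people


if __name__ == "__main__":
    for person in parse_people(SAMPLE):
        print(person)
```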
Website building
Would you mind expanding on this? How do you use the LLM to aid in building websites?
Copying some HTML and CSS code into the LLM and saying “change it to make it do xxxxxxx”.
It has also helped with some simple JavaScript bookmarklets.
Translation and summarisation of text, though I do double-check. Also, getting an initial draft for some emails, or rephrasing emails that I want to make more formal and concise.
I use it to review my meeting notes.
- “Based on the following daily notes, what should I follow-up on in my next meeting with #SomeTeamTag?”
- “Based on the following daily notes, what has the #SomeTeamTag accomplished the past month?”
- etc.
I’m not counting on it to catch everything, but it jogs my memory, it often pulls out things I completely forgot about, and it lets me get away with being super lazy. Whoops, 5 minutes before a meeting I forgot about? Suddenly I can follow up on things that were talked about last meeting, or, for sprint retrospectives, give feedback that’s accurate.
While this is something LLMs are decent at, I feel this is only of value if your notes are unstructured, and it presents infosec concerns.
I guess my notes are unstructured, as in they’re what I type as I’m in the meeting. I’m a “more is better” sort of note taker, so it’s definitely faster to let AI pull things out.
Infosec … I guess people will have to evaluate that for themselves. Certainly, for my use case there’s no concern.
kill time
They help me make better searches. I use ChatGPT to get a good idea of what to search for based on my inquiry. It tells me what I’m actually looking for, and then I just use a search engine based on that.
Also, it taught me some Python and Apps Script. Currently learning and testing its capabilities for teaching me JavaScript. And yes, I test out everything it gives me. It’s best to have it output small blocks of code and splice them together. Hoping for the best and then, 3 years later, finally creating an app lol, because that part is on my end. Still working on an organization app. It’s about 80 percent accurate at following complete directions in this case.
Literally nothing because it’s fucking useless.
so brave
I ask it increasingly absurd riddles and laugh when it hallucinates and tells me something even more absurd.
I use it to help me come up with better wording for things. A few examples:
- Writing annual goals for my team. I had an outline of what I wanted my goals to be, but wanted to get well-written detail about what it looks like to meet or exceed expectations on each goal, and to create some variations based on a couple of different job types.
- Brainstorming interview questions. I can use the job description and other information to come up with a starting list of questions and then challenge the LLM to describe how each question is useful. I rarely use the results as-is, but it helps me think through my interview plan better than just using a list of generic questions.
- Converting a stream-of-thought bullet list into a well-written communication.
Philosophy.
Ask it to act as Socrates, pick a topic and it will help you with introspection.
This is good for examining your biases.
e.g. I want to examine the role of government employees.
e.g. when is it OK to give up on an idea?

A fringe case where I’ve found ChatGPT very useful is learning about information that is plentiful but buried in dead threads on various old-school web forums, and thus very hard to Google, like other people’s experiences with homebrewing. Then I ask it for sources, and most often it accurately reflects the claims of other homebrewers, who themselves can be more or less correct.
I have it make me Excel formulas that I know are possible but can’t remember the names or structure for. Afterwards I always ask, “What’s a better way to display this data?” and I sometimes get a good response. For data security reasons I don’t give it any real data, but we have an internal one I can use for such things, and I sometimes throw spreadsheets in for random queries that I can make in plain language.
Just rewrote my corporate IT policies. I fed it all the old policies and a huge essay of criteria, styles, business goals, etc., then created a bunch of new policies. I have ChatGPT interview me about the new policies. I don’t trust what it outputs until I review it in detail, and I ask it things like:
What do other similarly themed policies have that mine don’t? Where is the policy going to be hard to enforce? What are my obligations annually, quarterly, and so on?
What forms should I have in place to capture information (e.g., consultant onboarding)?
I could do it all myself, but it would be slower and more likely to have inconsistencies and grammatical errors.
I find they’re pretty good at some coding tasks. For example, it’s very easy to make a reasonable UI given a sample JSON payload you might get from an endpoint. They’re good at stuff like crafting fairly complex SQL queries or making shell scripts. As long as the task is reasonably focused, they tend to get it right a lot of the time. I also find them useful for discovering language features when working with languages I’m not as familiar with.

LLMs are also great at translation and transcribing images. They’re useful for summaries and for finding information within documents, including codebases; I’ve found they make it a lot easier to search through papers where you might want to find relationships between concepts or definitions for things. They’re also good at subtitle generation as well as text-to-speech tasks.

Another task they’re great at is proofreading and providing suggestions for phrasing. They can also make a good sounding board: if there’s a topic you understand and you just want to bounce ideas off, it’s great to be able to talk through that with an LLM. Often the output it produces stimulates a new idea in my head. I also use LLMs as a tutor when I practice Chinese; they’re great for free-form conversational practice when learning a new language. These are just a few areas where I use LLMs on a nearly daily basis now.
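To give a concrete idea of what “reasonably focused” looks like for SQL, here’s a minimal sketch; the customers/orders schema and the query are hypothetical, not something an LLM actually produced for me:

```python
# Hypothetical sketch: the schema and query are invented to show the kind of
# focused SQL request ("total spend per customer in 2024, highest first")
# that an LLM tends to get right when asked verbatim.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         total REAL, placed_at TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES
        (1, 1, 40.0, '2024-01-05'),
        (2, 1, 60.0, '2024-02-10'),
        (3, 2, 15.0, '2024-02-11');
""")

query = """
    SELECT c.name, SUM(o.total) AS total_spend
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE o.placed_at BETWEEN '2024-01-01' AND '2024-12-31'
    GROUP BY c.name
    ORDER BY total_spend DESC;
"""
for name, total in conn.execute(query):
    print(name, total)
```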
I use LLMs to generate unit tests, among other things that have pretty much already been described here. It helps me discover edge cases I hadn’t considered, regardless of whether the generated unit tests themselves pass or not.
Oh yeah, that’s a good use case as well; it’s the kind of low-risk, tedious task that these things excel at.
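To make the edge-case point concrete, here’s a minimal sketch; the median() function and the specific tests are hypothetical, just the flavour of what gets generated:

```python
# Hypothetical illustration: LLM-suggested tests often probe edge cases
# (empty input, a single element, all-equal values) whether or not the
# implementation handles them yet.
import pytest


def median(values: list[float]) -> float:
    """Naive median; the edge-case tests below are the interesting part."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


def test_empty_input_raises():
    # Reveals that empty input isn't handled gracefully yet.
    with pytest.raises(IndexError):
        median([])


def test_single_element():
    assert median([42.0]) == 42.0


def test_all_equal_values():
    assert median([3.0, 3.0, 3.0, 3.0]) == 3.0


def test_even_count_averages_middle_pair():
    assert median([1.0, 2.0, 3.0, 4.0]) == 2.5
```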