Input risk in LLMs

Doug Slater, writing about input risk when using an LLM to code:

An LLM does not challenge a prompt that is leading, whose assumptions are flawed, or whose context is incomplete. Example: an engineer prompts, “Provide a thread-safe list implementation in C#” and receives 200 lines of flawless, correct code. It is still the wrong answer, because the question should have been, “How can I make this code thread-safe?”, to which the answer is “Use System.Collections.Concurrent” and one line of code. The LLM is not able to recognize an instance of the XY problem because it was not asked to.
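To make the contrast concrete, here is a minimal sketch of what that one-line answer could look like. The post only says to use System.Collections.Concurrent; the choice of ConcurrentBag below is my assumption, and any of the concurrent collections would make the same point:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch of the "one line" fix: instead of hand-rolling a locked list,
// use a ready-made type from System.Collections.Concurrent.
// ConcurrentBag is assumed here; ConcurrentQueue or another concurrent
// collection may fit better depending on the access pattern.
var results = new ConcurrentBag<int>();

// Safe to call from many threads at once, no explicit locking needed.
Parallel.For(0, 100, i => results.Add(i * i));

Console.WriteLine(results.Count); // prints 100
```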

The post covers a lot more ground on the risks of LLM-generated code. Another line that caught my attention:

LLMs accelerate incompetence.

Simon Willison talks about the other side when he says:

LLMs amplify existing expertise

The conclusion: if you are smart, LLMs can make you smarter, or at least make you sound smarter. If you are dumb, LLMs will make you dumber, without you ever knowing it.
