AI is generally priced based on the number of characters in an input and output.
Law is still predominantly priced based on minutes of time.
LLM pricing units are called tokens.
What is a token, and how are LLMs priced?
A token is a fundamental unit of data processed by algorithms, especially in natural language processing (NLP) and machine learning models. Tokens are the building blocks a model uses to read text (inputs) and generate text (outputs).
Typically a token is a segment of a word (a “toke,” if you will): roughly 4 characters, or about 3/4 of a word, in English.
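For the technically curious, here is a minimal sketch of that tokenization in Python, assuming OpenAI’s open-source tiktoken library is installed; exact counts vary by model and tokenizer:

```python
# Minimal tokenization sketch. Assumes the open-source "tiktoken" library
# (pip install tiktoken); exact counts vary by model and tokenizer.
import tiktoken

# cl100k_base is the encoding used by the GPT-4 / GPT-3.5 Turbo family.
encoding = tiktoken.get_encoding("cl100k_base")

text = "The party of the first part shall indemnify the party of the second part."
tokens = encoding.encode(text)

print("Characters:", len(text))
print("Tokens:    ", len(tokens))
print("Characters per token: about", round(len(text) / len(tokens), 1))
```

Run something like this on your own contract language and you will see quickly why a long document costs more to process than a short question.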
Providers often charge less (roughly half as much) for input tokens (the prompts sent to the model) than for output tokens (the text generated by the model).
Pricing varies based on the model's capabilities, size, and recency. Some providers offer volume discounts or tiered pricing for higher usage.
Here are some examples (prices per 1 million tokens, which is roughly 1,500-3,000 pages of text):
- GPT-4: $30 for input, $60 for output
- GPT-3.5 Turbo: $1.50 for input, $2 for output
- Claude (Anthropic): Ranges from $0.25 to $15 for input, $1.25 to $75 for output, depending on the model version
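To make the arithmetic concrete, here is a back-of-the-envelope sketch using the GPT-4 rates listed above; the token counts are hypothetical examples, not measured values:

```python
# Back-of-the-envelope cost estimate at the GPT-4 rates listed above:
# $30 per 1,000,000 input tokens, $60 per 1,000,000 output tokens.
INPUT_PRICE_PER_MILLION = 30.00   # dollars
OUTPUT_PRICE_PER_MILLION = 60.00  # dollars

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single prompt/response exchange."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_MILLION
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_MILLION)

# Hypothetical sizes: a terse question vs. a long contract with a long memo back.
print(f"Short exchange:   ${estimate_cost(200, 500):.4f}")       # about $0.036
print(f"Lengthy exchange: ${estimate_cost(20_000, 5_000):.2f}")  # about $0.90
```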
So, THE LESS CONTENT IN OR OUT, THE LESS EXPENSIVE THE TOOL WILL BE.
You now understand more about “prompt training” and why everyone is running around “training” people how to “write prompts.”
The better (more concise or exact) the prompt, generally, the lower the cost.
Oh, and one more thing: run out of tokens and there are no more inputs or outputs; the LLM shuts down on you!
CONTRAST THIS WITH LEGAL PRICING, WHICH IS STILL PREDOMINANTLY BASED ON TIME, USUALLY BILLED IN ONE-TENTH (1/10) OF AN HOUR, OR SIX (6) MINUTE, INCREMENTS.
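To put hypothetical numbers on the contrast (the hourly rate and token counts below are illustrative assumptions, not figures from any actual bill):

```python
# Hypothetical side-by-side of the two pricing models.
HOURLY_RATE = 400.00       # assumed lawyer's rate, dollars per hour
MIN_INCREMENT_HOURS = 0.1  # billed in tenths of an hour (six-minute blocks)

# A 90-second question still bills as a full 0.1-hour increment.
legal_cost = MIN_INCREMENT_HOURS * HOURLY_RATE

# The same question as an LLM exchange, at the GPT-4 rates listed earlier.
llm_cost = (200 / 1_000_000 * 30.00) + (500 / 1_000_000 * 60.00)

print(f"Law, billed by the tenth of an hour: ${legal_cost:.2f}")  # $40.00
print(f"LLM, billed by the token:            ${llm_cost:.4f}")    # $0.0360
```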
Can you see it yet?
With Law veritably swimming in LLMs, and members of the public for the first time in history “doing” retail self-service law, consumers know that the briefer their ask, the less they will spend.
As a corollary, the smaller the OUTPUTS they can extract from the LLM, the less they will spend as well.
CONSUMERS OF LAW ARE BEING HABITUATED TO (A) ASK BRIEFLY, AND (B) REQUIRE/DEMAND BREVITY.
Now take this same consumer of law, accustomed to LLM-fetched (sometimes right, sometimes wrong) Legal, and drop them into your lawyer’s office down the street.
Will their resistance to paying for ongoing minutes of time for Law be greater, or lesser, after their Law-by-LLM training?
Far, far greater.
“One toke over the line, counselor”
It may be imperceptible at this time. However, the more consumers of Law come to learn how fast they can obtain information, and that a lack of rapidity means a higher price, the more of these consumers will “push back” on the old model of Law, in which the lawyer sets the price metric per minute and then tells the consumer, “sit back and wait, I will handle this and let you know.”
It’s psychology and consumer behavior 101. The AI pricing model will erode the billable hour (née minute) model even further. How will lawyers adapt? Can they really hold onto a time metric that the client knows only the lawyer controls?
Let’s play us out with an old Brewer & Shipley classic. Take it away boys!