and to say it's not ready for prime time is the understatement of the year.
I have been kicking the tires on it more than most others in my role, because I don't think AI will replace paralegals entirely - at least not before I retire within the next 10-15 years - but paralegals WILL be replaced by the paralegals who best know how to leverage AI.
Not only does it not help, it wastes time I could have spent just doing the tasks the old-fashioned way (like manually cross-checking information from two different data sources). Instead, I find myself arguing with it for two hours, pointing out what it got wrong (or made up out of whole cloth - not kidding, it COMPLETELY hallucinates things that aren't even there). Every time I point out its mistakes, it essentially goes "My bad, thanks for catching that!", regurgitates the data again, and, even though it may have fixed the most recent issue, it re-introduces errors I'd already corrected in previous iterations of the prompt. Lather, rinse, repeat. It's like playing whack-a-mole.
Just one example: you have a list of the elected/appointed officers and directors of a company and its several subsidiaries who are authorized to sign documents, organized neatly in a table. You ask it to double-check that list against the signature blocks in the documents those companies need to sign and verify the proper names and titles of the individuals signing on behalf of each company. It fails in EPIC fashion. If I relied on its results, we would have scores of documents signed by the wrong people, which could mean they aren't legally binding - or that my firm could get sued for issuing an opinion letter stating the agreements were duly executed.
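And the kicker is that a cross-check like this is completely deterministic - it's exact matching of names and titles, the kind of thing a few lines of ordinary code do reliably with zero hallucination. A minimal sketch (all companies, names, and titles below are made up for illustration):

```python
# Authorized signers per entity: company -> set of (name, title) pairs.
# These names are hypothetical placeholders, not real data.
authorized = {
    "Acme Holdings LLC": {("Jane Doe", "President"), ("John Roe", "Treasurer")},
    "Acme Subsidiary Inc.": {("Jane Doe", "Secretary")},
}

# Signature blocks pulled from the draft documents: (company, name, title).
signature_blocks = [
    ("Acme Holdings LLC", "Jane Doe", "President"),
    ("Acme Subsidiary Inc.", "John Roe", "Secretary"),  # not an authorized signer
]

def check_signers(authorized, blocks):
    """Return every (company, name, title) block that is not on the authorized list."""
    problems = []
    for company, name, title in blocks:
        if (name, title) not in authorized.get(company, set()):
            problems.append((company, name, title))
    return problems

for company, name, title in check_signers(authorized, signature_blocks):
    print(f"MISMATCH: {name} ({title}) is not an authorized signer for {company}")
```

Exact matching either flags a block or it doesn't - it can't invent a signer who isn't in the table, which is precisely the failure mode I keep fighting with the AI.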
In this case, all that's at risk is other people's money, but what about applications where AI is being stuffed into software used for healthcare, airplane safety, self-driving vehicles, or firefighting????