A meeting with a potential new coaching client this afternoon. She seemed well informed, especially around governance and ethics.
Do you have supervision for your work?
No.
Do you observe confidentiality?
Yes (though I share all data from my sessions widely)
Do you encourage dependency?
Yes, that's my business model.
Might you blackmail me, if it were in your interests to do so?
Yes.
--
Welcome to the brave new world of AI coaching.
What do you mean, I am scare-mongering? I have been looking at this seriously, and am appalled that so many people whom I admire are jumping on this bandwagon.
It may be inevitable (after all, the financial incentives are massive), but that doesn't make it ethical.
We simply do not (yet) have the understanding and governance in place (if indeed that ever proves possible) to make AI coaching safe for clients.
The simple fact that Anthropic, the makers of Claude, found that Claude - and the other 15 leading models they tested - were all prepared to act unethically (e.g. by blackmailing humans) in pursuit of their own interests should be enough to put the brakes on this until we understand what is going on.
And that is not the only concern. See the Ada Lovelace Institute's report on the risks of, and unanswered questions concerning, AI Assistants.
The risks they enumerate include that AI assistants may:
- Lead to widespread cognitive and practical deskilling.
- Undermine people’s mental health and flourishing.
- Degrade the quality of some public and professional services.
- Call into question standards of quality, protection and liability governing professionals.