I am totally blissed out today by the fact that IEEE 7010-2020 – Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being (aka P7010) – is finally live and ready for people to start using!
Here’s the link: https://standards.ieee.org/content/ieee-standards/en/standard/7010-2020.html
I started tracking my time on various unpaid, yet work/development/volunteer-related projects earlier this year, but not nearly soon enough to capture how much time I spent working on this standard over the past two years or so. Whew! I am in awe of the leaders, like my long-time mentor, Laura Musikanski, who served as the working group chair. She and the other leaders spent far more time than I did, and demonstrated exceptional patience and people skills in helping all of us work together harmoniously.
I really feel this standard would be helpful for any organization that wants to evaluate, improve, or monitor systems that use, or will use, AI. In my consulting work I advocate for its use in helping startups that need a starting point to evaluate and improve their product outside of expensive clinical trials. I worked hard to include as much useful information as I could in the standard. Annex B is one of the outputs of my lobbying: we put a lot of time into developing it as an informative reference guide with example indicators that could help AI developers in many industries. While we have some for commercially oriented developers (advertising, autonomous vehicles, etc.), we also have example indicator sets for well-being-oriented technology, including examples relating to stress reduction, human resources, healthcare, and personal assistant AI. Annex A also includes extensive resource lists for product development regarding values and ethics.
It was a great learning process to participate in this project, and I hope to be able to help others use it for many years to come. I'm planning to write or develop some other, more specific follow-up resources, articles, etc. So if there's anything you feel is under-explained in the standard that you'd like to know more about, let me know!
Comments
It was indeed a privilege to help develop this standard and to walk away feeling I had been able to contribute substantially. It was immensely educational to work with such a diversity of professionals who do not call themselves engineers. This is a testament to IEEE's growing emphasis on humanitarian outcomes and, for this standard in particular, to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, spearheaded by John Havens.
I started out like many of my peers as a hard-core electrical engineer: a socially inept geek with “the knack” (Dilbert: https://www.youtube.com/watch?v=Dx6HojLBsnw). My passion for understanding the greatest engineering marvel – the human body and, in particular, the mind – has led me to understand in a very real engineering sense that we operate in a vast sea of external factors that deeply influence the functioning and survival of this marvel. We cannot ignore that sea, for much of it is detrimental to our well-being. IEEE 7010-2020 provides a way for designers to think about that sea, select which aspects are relevant to their projects, and design their systems so that they can measure the impacts of those systems on human well-being.
The transformation from regular old engineer to humanitarian engineer has been a healthy, educational, and exciting one, and one I don't plan on slowing. I recommend it for all engineers, especially the young ones. We need to be systems engineers, not specialists, and the “systems” in this case include humans-in-the-middle. Optimization – the holy grail of all engineers – means making sure the mind and body are freed and supported to function to their full potential. We cannot do that without understanding and designing around well-being.