I thought it might be good to open a discussion on the Ethics and Compliance side of AI.
The conversation about AI training AI is an interesting and exciting topic, but guard-rails need to be put in place to ensure we stay within the confines of the OECD guidelines for Trustworthy AI: there must be human oversight, and the model must be something that can be stopped and fixed/adjusted/retrained at any time… Cyberdyne Systems (Terminator) comes to mind!
Here are the OECD guidelines for anyone following along:
And the interactive tool itself here:
https://aitrustops.or.kr/requirementPool/map.do#
I think the old adage “Garbage in, garbage out” fits well here, but the hard part can be figuring out what the “garbage” is that’s feeding the AI.
There should be a “trust, but verify” system in place to make sure the output can be verified against the training data. But as we’ve seen from some early failures, those systems aren’t always in place.
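Just to make the verification idea concrete, here’s a toy sketch (nothing more than that): a naive word-overlap check standing in for a proper grounding or NLI model, flagging answer sentences that none of the source passages appear to support. All names here are hypothetical.

```python
# Toy "trust, but verify" check: flag answer sentences with little lexical
# overlap against the source passages they should be drawn from. A real
# system would use embedding similarity or an NLI model instead.

def overlap(sentence: str, passage: str) -> float:
    """Fraction of the sentence's words that also appear in the passage."""
    s_words = {w.lower().strip(".,") for w in sentence.split()}
    p_words = {w.lower().strip(".,") for w in passage.split()}
    return len(s_words & p_words) / max(len(s_words), 1)

def unsupported_sentences(answer: str, passages: list[str], threshold: float = 0.5) -> list[str]:
    """Return answer sentences that no passage appears to support."""
    return [s for s in answer.split(". ")
            if not any(overlap(s, p) >= threshold for p in passages)]

sources = ["The policy was approved by the board in March 2023."]
answer = "The policy was approved in March 2023. It applies to all vendors."
print(unsupported_sentences(answer, sources))  # -> ['It applies to all vendors.']
```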
When starting out, what would you look for in a cloud or local solution?
That’s a really good tool, and I like the fact that the first aspect is Risk Management, as that is the cornerstone of AI governance.
Of course, most organisations should have an established risk management process in place, so this should just be an extension of that, alongside other existing governance processes such as GDPR/privacy.
I think you need to start at an organisational level; pretty much any tool I’ve seen out there assumes a level of competence and familiarity with AI.
That’s absolutely not a given. The vast majority of people at all levels of organisations simply don’t understand AI well enough to even start assessing what the risks might be.
Data quality is secondary: an organisation can do the wrong thing overall in a way that’s executed transparently, safely and without bias, using high-quality data.
Education is part of the answer, but you need to consult with folks who properly understand it too, to be able to properly assess risk.
Those are, unfortunately, pretty thin on the ground in this brave new world!
Absolutely agree with you, Frank. I have created some training and workshop sessions to help stakeholders who aren’t IT- or AI-savvy understand the basics of AI and where to start with governance.
In reality, most organisations should have around 70% of what they need already in existence; however, it is normally scattered liberally around the organisation in various ivory towers and fiefdoms, and the art will be bringing all of those stakeholders together to create a joined-up governance structure.
My starting point is always to look at what is already there and working and build on that, rather than creating something new.
Let’s be honest though, it’s really difficult for organisations to make good governance decisions off that low a knowledge level.
Yep! I think the EU AI Act will force orgs to adopt better governance in the same way that GDPR has for data privacy.
On the other hand, I have been going through the existing US laws that are impacted by, or have something to do with, AI: there are 26 federal laws, and 12 states have so far brought in legislation or set up a task force to write some!
My 2pence
I don’t think corporates will lawyer up to enforce governance across an organisation; it’s too expensive. The last thing a CEO wants is more lawyers and the cost of business going up. A team that can walk governance out across the business makes some sense, especially if its members are picked from existing leadership.
What’s more, if they embed elements of governance in the workflows that teams pull down to build products, surely that’s a happy medium.
Happy to hear more
I agree that they won’t want more lawyers. My view is that most corporates already have 60-70% of what they need; it just needs all of the existing risk management, data privacy, ITIL, change management, programme steering boards, etc., to be pulled together, with the additional bits of AI governance sprinkled on top.
So funny to sign up to this and the first thing I see (of anything in the whole wide world, and any of the billions of people) is 2 humans I actually know in you Jeremy & Frank… it’s still a small world!
Welcome Ross, good to hear from you! It’s a small world, but I wouldn’t like to paint it…
(PS we must be overdue an AI Petrolheads Meet-up!)
Don provides some good options on how to allow the RAG process to run on private data (PII/PHI). One option is to anonymise/tokenise personal information before sending it to the LLM. We see this in Europe around cross-border data movement under the GDPR and the Schrems II ruling; the recommended supplementary technical measures include techniques to mask pertinent personal information and preserve privacy rights.
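To illustrate the anonymise/tokenise step, here’s a rough sketch of the idea (simple regexes standing in for a proper PII/NER detector; the function names are just mine, not anyone’s actual API). The token map stays inside the trust boundary, so values can be restored in the response only where policy allows.

```python
import re
import uuid

# Sketch of anonymisation/tokenisation before text is sent to an LLM.
# Simple regexes stand in for a real PII/PHI detection service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def tokenise(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with opaque tokens before the text leaves the trust boundary."""
    token_map: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for value in set(pattern.findall(text)):
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            token_map[token] = value
            text = text.replace(value, token)
    return text, token_map

def detokenise(text: str, token_map: dict[str, str]) -> str:
    """Restore original values in the LLM response, if policy allows it."""
    for token, value in token_map.items():
        text = text.replace(token, value)
    return text

masked, mapping = tokenise("Contact Jane at jane.doe@example.com or +44 7700 900123.")
print(masked)  # PII replaced with tokens; only `masked` is sent to the LLM
```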
Could VAST deliver this as a native function or perform checks on artefacts it pulls as part of the RAG process?
thanks
CV
Love this video from Don, he articulates the options so clearly
Yes, VAST has the capability (and is building further capabilities!) to run trigger-based functions like anonymization or removal of PII/PHI before sending data to third-party models. These triggered functions are essential in a lot of use cases, particularly for risk mitigation and governance.
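For anyone wondering what that looks like in practice, here’s a generic sketch of the trigger pattern (illustrative only, and not the actual VAST API): registered functions run on every payload before it is handed to an external model.

```python
import re
from typing import Callable

# Generic sketch of a trigger-based redaction hook (not the VAST API):
# every registered function runs on outbound data before it reaches a
# third-party model.
_outbound_triggers: list[Callable[[str], str]] = []

def on_outbound(fn: Callable[[str], str]) -> Callable[[str], str]:
    """Register a function to run on every outbound payload."""
    _outbound_triggers.append(fn)
    return fn

@on_outbound
def redact_dob(payload: str) -> str:
    # Placeholder rule; a real trigger would call a PII/PHI detection service.
    return re.sub(r"DOB:\s*[\d-]+", "DOB: [REDACTED]", payload)

def send_to_model(payload: str) -> str:
    """Apply every registered trigger, then hand the payload to the external model."""
    for trigger in _outbound_triggers:
        payload = trigger(payload)
    return payload  # placeholder for the actual call to the third-party model

print(send_to_model("Patient 0042, DOB: 1984-03-12, presented with chest pain."))
```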
Ha, brilliant, it is indeed a small world, getting smaller!
Does #RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems
I came across an intriguing paper from researchers at Cornell University discussing additional functions to uphold fairness in the RAG process. While these advances may not be universally implemented without regulatory requirements, it’s important to recognise the value of building such capabilities into services to demonstrate a commitment to impartiality.
Years after Timnit Gebru was ousted from Google for citing the same concerns, some progress is clearly being made.
I am not sure if this is on the VAST radar, as it may require either choosing the RAG framework you use, or an industry standard?
Very interesting paper, @degree_kiwis_0v! The thing is, the data itself typically comes from biased sources, so the output will reflect similar biases regardless of whether that output is coming from model training or RAG. Bias detection and amelioration is an incredibly important step in AI that I think we’re only now beginning to understand.
VAST will work with essentially any RAG system (we’re providing the infrastructure layer), and therefore we’ll have opinions and recommendations that will evolve over time, so we’ll absolutely be keeping an eye on this.
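To make the bias-detection point a bit more concrete, here’s a toy counterfactual probe (the keyword retriever and corpus below are hypothetical stand-ins for a real vector-store lookup): issue queries that differ only in a demographic term and compare what comes back, since materially different retrieved context is one way RAG can introduce or amplify unfairness.

```python
# Toy counterfactual probe for retrieval bias (illustrative only):
# compare what gets retrieved when only a demographic term changes.
CORPUS = [
    "Loan approval criteria for applicants with steady income.",
    "Historical default rates by postcode and age group.",
    "Guidance on assessing self-employed applicants.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return ranked[:k]

def counterfactual_probe(template: str, groups: list[str]) -> dict[str, list[str]]:
    """Retrieve for each demographic substitution so the result sets can be compared."""
    return {group: retrieve(template.format(group=group)) for group in groups}

results = counterfactual_probe("loan approval criteria for {group} applicants", ["male", "female"])
for group, docs in results.items():
    print(group, docs)  # materially different result sets warrant a closer look
```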
Hey, I thought this was a very solid outline from Dmytro @Collibra around building a
Federated Data Governance Model
It is crucial to note that this is not merely a cross-functional team; it comprises members from various groups at both strategic and tactical levels. These individuals are already actively engaged at the operational level within their respective departments.
It’s for sure a good measure when walking into shops to understand what level of success, if any, they will have with a data strategy.
Source: Dmytro Lugovyi on LinkedIn: #datagovernance #dataoffice #businesvalue #dataintelligencetip
Yes, that’s excellent and would sit neatly within the AI Governance structure, with the DG Steering Committee also being represented on the AI Steering Group.