10 Powerful Women Leaders Discuss Keeping AI Safe for Humanity

'History has shown that whenever a great invention gets into the wrong hands, evil tends to prevail. Right now, we're in the early stages of AI and currently exploring the many potential benefits of using AI for good.'

By Tyler Gallagher

This story originally appeared on Authority Magazine

Opportunities for women in tech are opening up, but there is still a lot of room for improvement: currently, about one in four tech jobs is held by a woman.

In an effort to highlight some of the accomplished women in this sector, Authority Magazine interviewed women leaders in artificial intelligence as part of a series. Each was asked the following question: "As you know, there is an ongoing debate between prominent scientists about whether advanced AI has the future potential to pose a danger to humanity. This debate has been personified as a debate between Elon Musk and Mark Zuckerberg. What is your position about this? What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?"

Here are some key responses.

These interviews have been edited for length and clarity.

Anna Bethke (AI Head at Intel)

I think there is a greater danger from people using AI in ways that are dangerous to humanity than from the AI itself. Just as a hammer or any other tool can be used for good, like building a birdhouse, and for harm, like smashing a vase, AI can be used for both. While it may be possible for an AI entity to learn to the point that it becomes self-aware and malicious, this would likely be extremely difficult to achieve. Individuals have already created malicious AI by building programs that conduct denial-of-service attacks, harass people on social media and more. While we shouldn't simply sweep the possibility of a self-aware, malevolent AI under the rug, we should be having conversations about maliciously created AI more often. Discussions around this topic are key: to hypothesize about any harmful results from AI systems, to determine ways to mitigate or stop those negative outcomes, and to inform the public about how and when the systems will be used. The more transparency we have as technology is developed and implemented, the better informed everyone can be, and the more of a voice they have to raise concerns. It's important to keep human judgment in the loop at some level, from the design, testing and implementation of a system to correcting any adverse behavior, either immediately or in batch upgrades as necessary. (Full context here.)

Carolina Barcenas (SVP at Visa)

I'm of the opinion that technology is our partner. Technology certainly brings changes, but I believe in our society's inherent ability to adapt. While some jobs might disappear (those that are repetitive and perhaps provide little satisfaction), new jobs will be created. This era of technological innovation isn't unprecedented; it is reminiscent of the Industrial Revolution. Ultimately, I believe that technology, when used correctly, will benefit our lives, and we are just at the beginning of a significant transformation. However, I also recognize that technology in the wrong hands can be devastating, which is why effective regulation around the ethical use of AI is vital. There has already been a lot of emphasis on the use of data and privacy protection. While Europe is leading the way and other countries are following, we are still at a nascent stage. I think it's important that we educate people on what AI really is and the challenges we need to address. (Full context here.)

Tatiana Mejia (AI Head at Adobe)

AI is a tool we developed and, as a community, we are responsible for guiding how it is created and how we use it. The technology in and of itself is neither good nor bad. At Adobe, we talk about our AI serving the creator and respecting the consumer. We are closely examining issues and implementing processes and guidelines for data privacy, data governance, data diversity, and more. This is where it starts — with education, transparency and an open, ongoing dialog. We also occasionally introduce experimental AI-powered technology "sneaks." This is where we can share new potential features and capabilities, some of which never make their way into products, so that we can get real, immediate feedback from our customers and community about what those innovations mean to them — what's useful, what can be improved and what can be more impactful and meaningful to creatives and marketers. (Full context here.)

Marie Hagman (Artificial Intelligence Director at Zillow)

Throughout history, people have used new technology to do amazing things and also to inflict suffering on others: nuclear energy and atomic bombs, or the role social media played during the Arab Spring in helping people organize politically versus its role today in spreading misinformation. I'm an optimist and think the benefits will far outweigh the risks. We need to keep advancing technology while putting safety measures in place and moving more quickly to create and update public policy to help mitigate those risks. I don't think we can or should try to assure people there's nothing to worry about. People should be informed of the risks and be vigilant. We have systems in place to protect society from bad actors, including law enforcement, the judicial system and public policy, and we should have leaders in those arenas who are informed and who prioritize protecting against the threats AI poses. Public policy, though, is often slow to catch up to technology. (Full context here.)

Madelaine Daianu, Ph.D. (Senior Manager of Data Science at Intuit)

I anticipate that these debates will only intensify as we advance our technologies. Some argue that they are premature, mainly because today's AI systems are very limited. We haven't yet discovered what it entails to build superintelligence; therefore, we do not yet have the right information to determine how best to mitigate any potential harm. I'd encourage the tech community to think about the ramifications of AI end to end, both short- and long-term. It is our responsibility, as contributors to this technology, to plan for the potential implications that AI can have. (Full context here.)

Doris Yang (Senior Director at BlackBerry Cylance)

The current state of AI relies on humans to define the boundaries of the questions and answers we're looking for and the data needed to process them; robots are not going to take over the world tomorrow. That said, regardless of how long it will take for sentient robots to be even a possibility, sitting down as a society to talk about the responsible use and application of AI is absolutely important. Even today, without Terminators running around, there are already questions of privacy and transparency: what personal data is collected, what it is used for and who has access to it. There are many immediate benefits of AI-driven solutions, but we should also be actively thinking about responsible governance so that we can continue to benefit from them without giving up more of our freedoms. I don't want to say there's absolutely nothing to worry about, because that would promote complacency. We need to start laying the groundwork now for what our basic boundaries are and remain vigilant that technologies and other entities don't encroach upon them.

Sentient robots aren't the only cause for concern. As a start, people should be thinking about privacy and security. Simple things like being able to predict what a person likes, what they're inclined to do and how they will respond have great applications in AI-driven assistants, shopping recommendations, marketing campaigns and the like. But what happens when the wrong people get access to that information? Should banks be able to reject you for a loan based on something you might do? Should the government be able to create and maintain profiles of its citizens and rank them according to probabilities or likely outcomes? (Full context here.)

Marina Arnaout (Manager at Microsoft)

When you remove mathematics from the equation, it is all a philosophical debate. History has shown that whenever a great invention gets into the wrong hands, evil tends to prevail. Right now we're in the early stages of AI and are exploring the many potential benefits of using AI for good, with initiatives that can help prevent natural disasters, provide greater assistance to the blind and, of course, power the many forms of business intelligence. AI has the potential to empower communities and businesses and offers many benefits. However, advanced AI can undoubtedly present downsides as well. It could have a significant economic impact by altering the workforce at a pace humans can't keep up with, and it presents many ethical challenges. More directly, AI could be programmed to do something destructive, or to do something beneficial but pursue it by a destructive method. This is a global issue. It sits with the world's political and business leaders, who need to take part in it equally, with representation from both East and West. Global leaders from different political systems need to align to ensure there is symmetry in ethics, data protection and guidelines, with full transparency to the public. (Full context here.)

Elena Fersman (Head of AI Research at Ericsson)

Every new technology brings a threat. While AI is special, since it's self-evolving, we should still have conditions or boundaries set in stone for its development. Those conditions are in the hands of humans. We need to take responsibility and dictate the "rules of the game" that should never be broken by these algorithms. These rules are no different from what we humans have to comply with: laws, regulations, ethical frameworks and social norms. To prevent such concerns, the "rules to live by" should be implemented in AI systems in a language they understand. That is, we need to digitalize these rules, make sure the frameworks and software architectures are flexible enough to allow for modifications, and ensure that algorithms can guarantee their adherence to these frameworks. (Full context here.)
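
Fersman's notion of digitalized rules can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration, not any real product's API: it encodes each rule as a machine-readable predicate and has an agent refuse any proposed action that violates one.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """A digitalized 'rule of the game': a named, machine-readable
    predicate that every proposed action must satisfy. (Hypothetical.)"""
    name: str
    check: Callable[[Dict], bool]  # returns True if the action complies

class GuardedAgent:
    """Wraps an AI policy so that every proposed action is validated
    against a modifiable set of rules before it is carried out."""

    def __init__(self, policy: Callable[[Dict], Dict], rules: List[Rule]):
        self.policy = policy
        self.rules = list(rules)  # flexible: rules can be added or updated later

    def act(self, state: Dict) -> Dict:
        action = self.policy(state)
        violations = [r.name for r in self.rules if not r.check(action)]
        if violations:
            # Refuse the non-compliant action instead of executing it.
            raise PermissionError(f"Action blocked; violated rules: {violations}")
        return action

# Hypothetical example rule: never share personal data.
no_pii = Rule("no_pii_sharing",
              lambda action: not action.get("shares_personal_data", False))

agent = GuardedAgent(policy=lambda state: {"shares_personal_data": False},
                     rules=[no_pii])
print(agent.act({"user": "alice"}))  # compliant, so the action is returned
```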

Chao Han (VP at Lucidworks)

AI can be applied in many areas, such as healthcare and self-driving cars, in ways that will benefit society. However, the misuse of AI by dangerous groups (such as terrorists) in areas like weapons development or gene editing is concerning. Currently, AI doesn't have the self-learning ability to harm people intentionally or to connect with other systems over the internet to do large-scale damage, but such behaviors could become possible down the road if not carefully controlled, even if not in the near future. There is currently no centralized organization to regulate the AI industry and its research. We rely heavily on the big IT companies to have good intentions and to follow standard model-building processes to prevent AI bias and misuse. I hope such an organization can be established as soon as possible to earn more public trust. (Full context here.)

Trisala Chandaria (CEO of Temboo)

All new technologies provide opportunities but also have the potential to pose a danger to humanity. That's the nature of technology. Think about when cars were first introduced: we had to build infrastructure, safety measures and other technologies to accommodate them. Things like roads, driver's licenses, seatbelts and airbags were all needed to ensure that the public could use cars safely and effectively. Similarly, there has to be a multi-faceted approach to implementing new technologies on the part of both the public and private sectors. We need to take safety, infrastructure, laws and more into account when we build out these technologies. As long as we take those things into consideration, we can use AI for good. However, those measures are still being built out, and they need to be prioritized now. We shouldn't try to prevent people from having concerns. A concerned public is an informed public. People should ask questions about how their data is being used, which technologies are touching their lives and how AI is impacting society. There's a difference between concern and fear. We don't want to scare people off new technologies, but we do want to make sure they're given choices around the technologies that affect their daily lives. It's up to the leaders in the AI world to make sure the public feels empowered and informed enough to make those choices. (Full context here.)
