By 2025, an estimated 75 billion interconnected smart devices will be in use in our homes and offices, many of them making decisions on their own without consulting us or the cloud.
If we want these connected devices to make decisions for us, we must ensure that they behave ethically and that the AI and machine learning operations behind them are secure.
Developed countries are already drafting legislation for these decision-making devices. Legislators are collaborating with the companies that manufacture them to devise and implement an ethical code of conduct for AI- and machine-learning-based system development, emphasizing key principles such as transparency, privacy, and fairness.
A code of conduct alone is not enough to make these devices safe to use. The industries involved need to ensure that their systems are architected securely and make ethical decisions, and that physical intervention is possible if a system fails to obey the code of conduct.
As the Internet of Things (IoT) grows and AI becomes a key component of computing, AI ethics is becoming a pressing issue. Over 750 million AI chips were sold in 2020. Their processing power keeps increasing, and they are now part of smartphones, security cameras, thermostats, and other smart devices. These systems are getting smarter through machine learning, and their dependence on the internet for decision-making is shrinking.
Building reliable and safe AI/ML systems depends on designing and developing them in close collaboration with humans. Privacy and security must be built in from the very beginning of system development; they cannot be added at a later stage.
These systems require the highest level of security at every phase of development, at both the software and hardware levels. Systems must be capable of processing input data safely, and advanced cryptography solutions are increasingly being used to protect them.
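As a concrete illustration of the kind of cryptographic protection described above, the sketch below shows one common building block: authenticating each sensor reading with an HMAC so a receiver can detect tampering. This is a minimal, hypothetical example (the key name, message format, and functions are invented for illustration; a real device would store the key in secure hardware and use a full provisioning scheme).

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for illustration only. On a real device this
# key would be provisioned into and read from secure hardware, never a
# constant in source code.
DEVICE_KEY = b"example-device-key"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a sensor reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and check it; False means the data was altered."""
    payload = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(expected, message["tag"])
```

A receiver that verifies readings this way will reject any message whose payload was modified in transit, which is exactly the sort of baseline software-level protection these devices need before any decision-making logic runs on the data.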
Hardware security will play an important role in preventing attacks that extract sensitive data from AI/ML-based systems. Devices holding sensitive data must be equipped with security measures to counter such attacks.
Accountability for these systems is currently inconsistent. The AI ecosystem is the work of many different creators, so holding them accountable will not be possible until they come together on one platform and agree on a comprehensive code of conduct for AI/ML systems.
A tiny vulnerability can collapse the whole AI ecosystem.