Artificial Intelligence (AI) is central to many systems people use today. From voice-activated assistants in homes to modern driving automation, AI systems gather information, interpret it, and make decisions in ways that resemble human cognition, and in some domains they have demonstrated superhuman capabilities. However, the more complex an AI system becomes, the greater the chance of erroneous conclusions or decisions. Users’ perception of an AI system, and the trust they place in it, affect whether the system is accepted and used. Our research focuses on collaboration between the user and the AI, leveraging the strengths of both parties. The goal is to provide design considerations that help users understand the capabilities and limitations of an AI system and calibrate their trust, so that they intervene when the AI makes errors or fails to respond to malicious cyberattacks.