My long-term research goal is to answer the following question: how can we establish and maintain interactions between humans and agents (e.g., machines or other humans) so that we achieve desired outcomes (e.g., social good, profit maximization)? A critical step is to understand the roles of incentives, institutions, and norms in large-scale multi-agent interactions through prediction, learning, and games. To this end, I pursue interdisciplinary research spanning multi-agent systems, game theory, human-agent interaction, and online learning.

My current work focuses on: (1) resilient mechanism design and its applications to online platforms and networks where agents may not be perfectly rational; (2) strategic diffusion in large-scale networks, with applications to advertising, auctions, and cybersecurity; (3) online methods in machine learning and their applications to strategic decision making (I am particularly interested in learning algorithms that perform well with "small" data); and (4) fairness, accountability, transparency, and ethics in artificial intelligence. My typical methodologies include information design, mechanism design, social influence theory, learning and optimization for sequential decision making, and multi-agent simulation.