My long-term research goal is to answer the following intriguing question: how can we enable machines to automatically design effective interventions (e.g., mechanisms, contracts, and information structures) that promote cooperation among strategic agents (e.g., machines and humans) toward desired outcomes (e.g., public good, profit maximization, and manipulation prevention)? A critical step is to understand the roles of incentives, institutions, and norms in large-scale multi-agent interactions through prediction, learning, and games. To this end, I pursue interdisciplinary research spanning multi-agent systems, game theory, human-agent interaction, and online learning. My current work focuses on: (1) resilient mechanism design and its applications to online platforms and networks where agents may not be perfectly rational; (2) strategic diffusion in large-scale networks, with applications to advertising, auctions, and cybersecurity; (3) online methods in machine learning and their applications to strategic decision making (I am particularly interested in learning algorithms that work well with "small" data); and (4) trustworthy machine learning. My typical methodologies include information design, mechanism design, social influence theory, learning and optimization in sequential decision making, and multi-agent simulation.