Abstract: This study applies the "Human-Machine-Environment" framework to examine algorithmic security risks across four phases: generation, amplification, diffusion, and outbreak. These risks emerge from risk sources, catalytic triggers, and transmission channels, and are exacerbated by interdependent ontological, managerial, and environmental factors, resulting in information cocoons, algorithmic weaponization, and distorted power dynamics. European Union (EU) and U.S. governance approaches differ significantly in objectives, stakeholders, regulatory priorities, and implementation frameworks. To avoid the Collingridge Dilemma in algorithmic governance, China should draw on EU and U.S. experience in algorithm security governance across four dimensions: concept, management, technology, and mode. This entails balancing human-centered values with security-development synergies, establishing an integrated governance architecture that combines centralized coordination with decentralized adaptability, harnessing intelligent technologies to embed ethical principles in algorithmic design, and fostering multi-stakeholder collaborative mechanisms, thereby achieving comprehensive governance of algorithm security.