Meta is automating up to 90% of its risk assessments for new features and updates across Facebook and Instagram, shifting the work from human reviewers to AI-powered systems. The move aims to streamline decision-making and accelerate product launches, but it has raised concerns among current and former employees about whether the AI can accurately identify and prevent real-world harm. While Meta says human expertise will still handle “novel and complex issues,” critics fear the automation could increase privacy risks, enable the spread of harmful content, and erode safeguards that were previously in place.