This article examines legal mechanisms for risk management in the deployment of artificial intelligence (AI) systems for banking credit scoring. It analyzes international standards and approaches, including the GDPR, the draft EU AI Act, and ISO/IEC 23894:2023, as well as the legislation of the Republic of Uzbekistan (the Law on Personal Data, the AI Development Strategy until 2030, and related acts). Based on comparative analysis and case studies (Apple Card in the US, SCHUFA in Germany, Asia Alliance Bank in Uzbekistan), the key risks of using AI in credit scoring are identified: discrimination, opacity of algorithms, violation of consumer rights, and model inadequacy. The article also examines existing legal measures to minimize these risks, such as requirements to prevent algorithmic discrimination, ensure transparency and explainability of decisions, protect personal data, and uphold consumers' right to contest automated decisions. The discussion section offers recommendations, informed by international experience, for improving the regulation of AI in bank lending in Uzbekistan: developing dedicated rules for high-risk AI systems, mandating risk assessments and algorithm audits, and strengthening oversight of compliance with the principles of fairness and transparency. Implementing these measures would reduce the likelihood of algorithmic errors and abuse, increase trust in AI systems in the financial sector, and strike a balance between innovation and the protection of citizens' rights.