# Responsible AI

Learn how to integrate Responsible AI practices into your ML workflow using TensorFlow
=======================================================================================

TensorFlow is committed to helping make progress in the responsible development of AI by sharing a collection of resources and tools with the ML community.

What is Responsible AI?
-----------------------

The development of AI is creating new opportunities to solve challenging, real-world problems. It is also raising new questions about the best way to build AI systems that benefit everyone.

#### Recommended best practices for AI

Designing AI systems should follow software development best practices while taking a human-centered approach to ML.

#### Fairness

As the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for everyone.

#### Interpretability

Understanding and trusting AI systems is important to ensuring they work as intended.

#### Privacy

Training models on sensitive data requires privacy-preserving safeguards.

#### Security

Identifying potential threats can help keep AI systems safe and secure.

[Learn more about Google's Responsible AI Practices](https://ai.google/responsibilities/responsible-ai-practices/)

Responsible AI in your ML workflow
----------------------------------

Responsible AI practices can be incorporated at every step of the ML workflow. Here are some key questions to consider at each stage.

### Who is my ML system for?

The way actual users experience your system is essential to assessing the true impact of its predictions, recommendations, and decisions. Make sure to get input from a diverse set of users early in your development process.

### Am I using a representative dataset?

Is your data sampled in a way that represents your users (e.g. the model will be used for all ages, but you only have training data from senior citizens) and the real-world setting (e.g. it will be used year-round, but you only have training data from the summer)?

### Is there real-world/human bias in my data?

Underlying biases in data can contribute to complex feedback loops that reinforce existing stereotypes.

### What methods should I use to train my model?

Use training methods that build fairness, interpretability, privacy, and security into the model.

### How is my model performing?

Evaluate user experience in real-world scenarios across a broad spectrum of users, use cases, and contexts of use. Test and iterate in dogfood first, followed by continued testing after launch.

### Are there complex feedback loops?

Even if everything in the overall system design is carefully crafted, ML-based models rarely operate with 100% perfection when applied to real, live data. When an issue occurs in a live product, consider whether it aligns with any existing societal disadvantages, and how it will be impacted by both short- and long-term solutions.

Responsible AI tools for TensorFlow
-----------------------------------

The TensorFlow ecosystem has a suite of tools and resources to help tackle some of the questions above.

Step 1: Define problem
----------------------

Use the following resources to design models with Responsible AI in mind.

[People + AI Research (PAIR) Guidebook](https://pair.withgoogle.com/guidebook/)
Learn more about the AI development process and key considerations.

[PAIR Explorables](https://pair.withgoogle.com/explorables/)
Explore key questions and concepts in the realm of Responsible AI through interactive visualizations.

Step 2: Construct and prepare data
----------------------------------

Use the following tools to examine data for potential biases.

[Know Your Data (Beta)](https://knowyourdata.withgoogle.com/)
Interactively investigate your dataset to improve data quality and mitigate fairness and bias issues.

[TF Data Validation](/tfx/guide/tfdv)
Analyze and transform data to detect problems and engineer more effective feature sets (see the sketch after this list).

[Data Cards](https://research.google/static/documents/datasets/crowdsourced-high-quality-colombian-spanish-es-co-multi-speaker-speech-dataset.pdf)
Create a transparency report for your dataset.

[Monk Skin Tone Scale (MST)](https://www.skintone.google/)
A more inclusive, openly licensed skin tone scale that makes data collection and model building more robust and inclusive.
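To make this step concrete, here is a minimal sketch of profiling and validating a dataset with TF Data Validation. The CSV paths are hypothetical placeholders; the `tfdv` functions shown are the library's standard entry points.

```python
# A minimal sketch of data validation with TensorFlow Data Validation.
# The CSV paths are hypothetical placeholders for your own data.
import tensorflow_data_validation as tfdv

# Compute descriptive statistics over the training data.
train_stats = tfdv.generate_statistics_from_csv(data_location='train.csv')

# Infer an initial schema (types, domains, expected presence) from the stats.
schema = tfdv.infer_schema(statistics=train_stats)

# Validate a fresh slice of data (e.g. the eval split) against that schema.
eval_stats = tfdv.generate_statistics_from_csv(data_location='eval.csv')
anomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)

# In a notebook, these render as interactive summaries.
tfdv.display_schema(schema)
tfdv.display_anomalies(anomalies)
```

Anomalies flagged here (missing values, unexpected categories, drift between splits) are often the first visible symptom of an unrepresentative dataset.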
Step 3: Build and train model
-----------------------------

Use the following tools to train models using privacy-preserving, interpretable techniques, and more.

[TF Model Remediation](/responsible_ai/model_remediation)
Train machine learning models to promote more equitable outcomes.

[TF Privacy](/responsible_ai/privacy/guide)
Train machine learning models with privacy (see the DP-SGD sketch below).

[TF Federated](/federated)
Train machine learning models using federated learning techniques.

[TF Constrained Optimization](https://github.com/google-research/tensorflow_constrained_optimization/blob/master/README.md)
Optimize inequality-constrained problems.

[TF Lattice](/lattice/overview)
Implement flexible, controlled, and interpretable lattice-based models (also sketched below).
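As one example from this step, here is a minimal sketch of differentially private training with TF Privacy's DP-SGD Keras optimizer. The model shape, synthetic data, and hyperparameter values are illustrative placeholders, not tuned recommendations.

```python
# A minimal sketch of differentially private training with TF Privacy.
# Model, data, and hyperparameters are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,      # clip each per-example gradient to this L2 norm
    noise_multiplier=1.1,  # stddev of added noise, relative to the clip norm
    num_microbatches=32,   # must evenly divide the batch size
    learning_rate=0.1)

# The loss must stay per-example (Reduction.NONE) so gradients can be
# clipped and noised per microbatch before averaging.
loss = tf.keras.losses.BinaryCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)

model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

x = np.random.normal(size=(256, 20)).astype('float32')
y = np.random.randint(0, 2, size=(256, 1)).astype('float32')
model.fit(x, y, batch_size=32, epochs=1)
```

The clipping norm and noise multiplier together determine the privacy budget; TF Privacy ships accounting utilities for computing the resulting epsilon.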
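And here is a minimal sketch of a TF Lattice calibration layer that constrains the learned relationship to be monotonically increasing in a single input feature. The keypoints, feature range, and toy data are hypothetical.

```python
# A minimal sketch of a calibrated, monotonic model with TF Lattice.
# Keypoints, feature range, and synthetic data are illustrative.
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(1,)),
    # Piecewise-linear calibration constrained to be monotonically
    # increasing in the input feature, keeping the learned shape
    # interpretable and controllable.
    tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=10),
        monotonicity='increasing',
        output_min=0.0,
        output_max=1.0),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Train on toy data where the label truly increases with the feature.
x = np.random.uniform(0.0, 1.0, size=(256, 1)).astype('float32')
y = (x[:, 0] > 0.5).astype('float32')
model.fit(x, y, epochs=2, verbose=0)
```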
[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],[],[],[],null,["# Responsible AI\n\nLearn how to integrate Responsible AI practices into your ML workflow using TensorFlow\n======================================================================================\n\nTensorFlow is committed to helping make progress in the responsible development of AI by sharing a collection of resources and tools with the ML community. \n\nWhat is Responsible AI?\n-----------------------\n\nThe development of AI is creating new opportunities to solve challenging, real-world problems. It is also raising new questions about the best way to build AI systems that benefit everyone. \n\n#### Recommended best practices for AI\n\nDesigning AI systems should follow software development best practices while taking a human-centered \napproach to ML \n\n#### Fairness\n\nAs the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive to everyone \n\n#### Interpretability\n\nUnderstanding and trusting AI systems is important to ensuring they are working as intended \n\n#### Privacy\n\nTraining models off of sensitive data needs privacy preserving safeguards \n\n#### Security\n\nIdentifying potential threats can help keep AI systems safe and secure \n[Learn more about Google's Responsible AI Practices](https://ai.google/responsibilities/responsible-ai-practices/) \n\nResponsible AI in your ML workflow\n----------------------------------\n\nResponsible AI practices can be incorporated at every step of the ML workflow. Here are some key questions to consider at each stage. \nDefine problem Construct and prepare data Build and train model Evaluate model Deploy and monitor \n\n### Who is my ML system for?\n\nThe way actual users experience your system is essential to assessing\nthe true impact of its predictions, recommendations, and decisions. Make\nsure to get input from a diverse set of users early on in your\ndevelopment process. \n\n### Am I using a representative dataset?\n\nIs your data sampled in a way that represents your users (e.g. will be\nused for all ages, but you only have training data from senior citizens)\nand the real-world setting (e.g. will be used year-round, but you only\nhave training data from the summer)? \n\n### Is there real-world/human bias in my data?\n\nUnderlying biases in data can contribute to complex feedback loops that\nreinforce existing stereotypes. \n\n### What methods should I use to train my model?\n\nUse training methods that build fairness, interpretability, privacy, and\nsecurity into the model. \n\n### How is my model performing?\n\nEvaluate user experience in real-world scenarios across a broad spectrum\nof users, use cases, and contexts of use. Test and iterate in dogfood\nfirst, followed by continued testing after launch. \n\n### Are there complex feedback loops?\n\nEven if everything in the overall system design is carefully crafted,\nML-based models rarely operate with 100% perfection when applied to\nreal, live data. 
Step 5: Deploy and monitor
--------------------------

Use the following tools to track and communicate about model context and details.

[Model Card Toolkit](/responsible_ai/model_card_toolkit/guide)
Generate model cards with ease using the Model Card Toolkit (see the sketch after this list).

[ML Metadata](/tfx/guide/mlmd)
Record and retrieve metadata associated with ML developer and data scientist workflows (also sketched below).

[Model Cards](https://modelcards.withgoogle.com/about)
Organize the essential facts of machine learning in a structured way.
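To make this step concrete, here is a minimal sketch of scaffolding and exporting a model card with the Model Card Toolkit. The output directory and card fields are hypothetical, and exact method names (e.g. `update_model_card`) have varied across toolkit versions.

```python
# A minimal sketch of generating a model card with the Model Card Toolkit.
# Output directory and card contents are hypothetical placeholders.
import model_card_toolkit as mct

toolkit = mct.ModelCardToolkit(output_dir='model_card_assets')

# Scaffold a card, fill in the fields you want to publish, then export.
model_card = toolkit.scaffold_assets()
model_card.model_details.name = 'Demo classifier'
model_card.model_details.overview = (
    'A binary classifier trained on a demo dataset; for illustration only.')

toolkit.update_model_card(model_card)
html = toolkit.export_format()  # renders the card to HTML under output_dir
```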
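And here is a minimal sketch of recording a dataset artifact with ML Metadata, assuming a local SQLite backend; the file paths and the `DataSet` type are hypothetical.

```python
# A minimal sketch of recording lineage metadata with ML Metadata (MLMD).
# Paths and property names are hypothetical placeholders.
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2

config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = 'mlmd.sqlite'
config.sqlite.connection_mode = (
    metadata_store_pb2.SqliteMetadataSourceConfig.READWRITE_OPENCREATE)
store = metadata_store.MetadataStore(config)

# Register an artifact type for datasets.
dataset_type = metadata_store_pb2.ArtifactType()
dataset_type.name = 'DataSet'
dataset_type.properties['version'] = metadata_store_pb2.INT
dataset_type_id = store.put_artifact_type(dataset_type)

# Record one dataset artifact and read it back.
artifact = metadata_store_pb2.Artifact()
artifact.type_id = dataset_type_id
artifact.uri = 'path/to/train.csv'
artifact.properties['version'].int_value = 1
[artifact_id] = store.put_artifacts([artifact])
print(store.get_artifacts_by_id([artifact_id]))
```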
Community resources
-------------------

Learn what the community is doing and explore ways to get involved.

[Crowdsource by Google](https://crowdsource.google.com/)
Help Google's products become more inclusive and representative of your language, region, and culture.

[Responsible AI DevPost Challenge](https://responsible-ai.devpost.com/)
We asked participants to use TensorFlow 2.2 to build a model or application with Responsible AI principles in mind. Check out the gallery to see the winners and other amazing projects.

Responsible AI with TensorFlow (TF Dev Summit '20)
A talk introducing a framework to think about ML, fairness, and privacy.

Explore Google AI resources to guide your AI/ML journey
-------------------------------------------------------

[See AI principles](https://ai.google/responsibilities/#our-principles)