
Google introduces fresh guidelines for Android applications

As generative AI technology becomes more widely available to developers, Android applications are expected to adopt these models to boost engagement and improve the overall user experience. At the same time, Google has issued a directive requiring developers to include a mechanism for reporting offensive AI-generated content within their apps.

To uphold responsible AI practices and keep users safe, Google is introducing new rules governing AI-generated content. Beginning next year, developers must integrate a feature that lets users report or flag offensive AI-generated content without leaving the app. These reports will inform content filtering and moderation within the apps, similar to the existing in-app reporting system required by Google’s User Generated Content policies.

Moreover, applications utilizing AI to generate content must proactively prevent the creation of restricted content, such as material that could facilitate the exploitation or abuse of children, as well as content promoting deceptive behavior.

In a move to bolster user privacy, Google is introducing a policy that limits apps’ access to photos and videos strictly for purposes directly related to the app’s functionality. Apps with sporadic or one-time requirements for accessing these files must use system pickers, like the Android photo picker, for retrieval.
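For occasional or one-time photo access, the Android photo picker can be launched through the AndroidX activity-result APIs without requesting any storage permissions. A minimal sketch, assuming an AndroidX `ComponentActivity` (the activity name and click handler are illustrative):

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class GalleryActivity : ComponentActivity() {

    // Registers the system photo picker; no READ_MEDIA_* permission is needed,
    // because the user explicitly selects the file to share with the app.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                // Handle the selected image, e.g. display or upload it.
            }
        }

    // Hypothetical click handler: launch the picker restricted to images only.
    fun onChoosePhotoClicked() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```

Because the picker runs in a separate system process and returns only the URIs the user chose, the app never gains blanket access to the media library.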

Lastly, Google is imposing stricter rules on full-screen intent notifications. For apps targeting Android 14, such notifications will be restricted to high-priority use cases, such as alarms or incoming phone and video calls. For any other use case, apps will need to obtain explicit user permission before posting full-screen intent notifications.
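On Android 14 (API level 34), an app can check whether it is still allowed to post full-screen intents and, if not, direct the user to the system settings screen where the permission is granted. A hedged sketch of that flow:

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Returns true if the app may attach a full-screen intent to its notifications.
// On versions below Android 14 the restriction does not apply.
fun canUseFullScreenIntent(context: Context): Boolean {
    val nm = context.getSystemService(NotificationManager::class.java)
    return if (Build.VERSION.SDK_INT >= 34) nm.canUseFullScreenIntent() else true
}

// Opens the system settings page where the user can grant full-screen
// intent access to this app.
fun requestFullScreenIntentAccess(context: Context) {
    if (Build.VERSION.SDK_INT >= 34 && !canUseFullScreenIntent(context)) {
        val intent = Intent(
            Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT,
            Uri.parse("package:" + context.packageName)
        )
        context.startActivity(intent)
    }
}
```

Apps that legitimately need the capability (e.g. alarm clocks or calling apps) keep it by default; others must route the user through this settings screen.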
