Creating an AI-powered Android app has never been easier, thanks to Google’s Gemini API. In this guide, we’ll walk through how to set up and integrate Gemini into an Android app using Android Studio, covering everything from setting up your environment and getting your API key to customizing AI responses with the starter pack.
1. Setting Up a New Project with Gemini
If you have Android Studio Koala or newer, you don’t need any additional plugins to start working with Gemini: the Gemini API starter template is included by default.
To create a new project integrated with Gemini:
- Click on “New Project” in Android Studio.
- Choose the Gemini API Starter under AI options.

- Set your project name, package, and other settings.
- Click on “Finish” to let Android Studio generate the default project structure.
The project will come with pre-set Jetpack Compose screens and a basic ViewModel that interacts with Gemini. It’s a great starting point to see the potential of using Gemini for generative content.
2. Acquiring Your API Key for Gemini
To use the Gemini API, you need an API Key. Head over to Google AI Studio:
- Sign in with your Google account.
- Create or select an existing project.
- Click on “Generate API Key”.

- Click on Copy.

Caution: This quickstart is aimed at prototyping. For production environments, it’s recommended to access the Gemini API through a backend SDK so the key never ships inside the app. Embedding your API key in your app exposes it to security risks.
In practice, this means someone could take your app, decompile it with jadx-gui, extract your API key, and run up your bill 🙂
3. Setting Up Your Android Project to Use Gemini SDK
Add the necessary dependency for the Gemini API SDK in your app-level build.gradle.kts file:

dependencies {
    implementation("com.google.ai.client.generativeai:generativeai:0.7.0")
}
Sync the project to ensure the dependencies are properly installed.
Storing Your API Key Securely:
It’s crucial to keep your API key safe. Store it in the local.properties file and use the Secrets Gradle Plugin to expose it as a build configuration variable:
// Access your API key as a Build Configuration variable
val apiKey = BuildConfig.apiKey
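As a sketch, the setup might look like the following. The property name `apiKey` and the plugin version are illustrative assumptions; check the Secrets Gradle Plugin documentation for the current version and your preferred key name.

```kotlin
// local.properties (never checked into version control)
// apiKey=YOUR_API_KEY

// app-level build.gradle.kts
plugins {
    // Secrets Gradle Plugin: reads local.properties and generates
    // BuildConfig fields for each key (version is an assumption)
    id("com.google.android.libraries.mapsplatform.secrets-gradle-plugin")
}

android {
    buildFeatures {
        // Required on recent AGP versions for BuildConfig generation
        buildConfig = true
    }
}
```

After a Gradle sync, `BuildConfig.apiKey` becomes available in your Kotlin code while the actual key stays out of your repository.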
4. Working with the Default Code in the Gemini Starter Pack
The starter pack includes a default screen built with Jetpack Compose and a ViewModel that talks to Gemini. Below is a quick breakdown of how it works:
Jetpack Compose UI: BakingScreen.kt
The screen allows users to:
- Select one of the bundled sample images to send to Gemini.
- Enter a prompt to customize the output.
- View the generated text response.
The BakingScreen consists of a LazyRow for selecting images and a TextField for entering a text prompt. The UI state (Loading, Success, or Error) is managed by a ViewModel.
@Composable
fun BakingScreen(
    bakingViewModel: BakingViewModel = viewModel()
) {
    // Observe the UI state exposed by the ViewModel
    val uiState by bakingViewModel.uiState.collectAsState()

    // UI elements to select images and input prompts
    // ...
    if (uiState is UiState.Loading) {
        CircularProgressIndicator(modifier = Modifier.align(Alignment.CenterHorizontally))
    } else {
        // Show the response text on Success, or the error message on Error
        val result = when (val state = uiState) {
            is UiState.Success -> state.outputText
            is UiState.Error -> state.errorMessage
            else -> ""
        }
        Text(
            text = result,
            textAlign = TextAlign.Start,
            modifier = Modifier.fillMaxSize()
        )
    }
}
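For reference, the UiState used above can be modeled as a simple sealed hierarchy. The exact names here mirror the starter pack’s conventions, but treat this as a sketch:

```kotlin
// Sealed interface describing everything the screen can render:
// an initial empty state, a loading spinner, a result, or an error.
sealed interface UiState {
    object Initial : UiState
    object Loading : UiState
    data class Success(val outputText: String) : UiState
    data class Error(val errorMessage: String) : UiState
}
```

Because the hierarchy is sealed, the `when` over states in the composable is exhaustive, so the compiler flags any state you forget to handle.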
Handling Prompts in BakingViewModel.kt
The ViewModel manages the communication with Gemini and updates the UI state accordingly. The main function, sendPrompt(), sends an image and a text prompt to Gemini using the GenerativeModel class.
fun sendPrompt(
    bitmap: Bitmap,
    prompt: String
) {
    _uiState.value = UiState.Loading

    viewModelScope.launch(Dispatchers.IO) {
        try {
            val response = generativeModel.generateContent(
                content {
                    image(bitmap)
                    text(prompt)
                }
            )
            response.text?.let { outputContent ->
                _uiState.value = UiState.Success(outputContent)
            }
        } catch (e: Exception) {
            _uiState.value = UiState.Error(e.localizedMessage ?: "")
        }
    }
}
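The generativeModel used above is created once in the ViewModel and reused for every prompt. A minimal sketch, assuming the `apiKey` BuildConfig field from step 3 (the model name is also an assumption; use any model your key has access to):

```kotlin
// Inside BakingViewModel: a single GenerativeModel instance,
// configured with the model name and the API key from BuildConfig.
private val generativeModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey
)
```

Constructing it once avoids re-creating the client on every call to sendPrompt().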
5. Running and Customizing the App
Run the app on your device or emulator. You can input prompts to see how Gemini AI responds. The starter pack gives you a foundation to build more complex interactions, such as generating stories, images, or any creative content.
6. Best Practices and Security Tips
- Remember to keep your API Key secure and avoid exposing it in your codebase.
- For sensitive data processing, consider using Gemini Nano, which runs on-device for offline use and enhanced privacy.
- The API has certain rate limits (e.g., 15 RPM), so plan your usage accordingly.
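Given those rate limits, one simple safeguard is to retry a failed request with exponential backoff instead of surfacing the first error. This is a hedged sketch, not part of the starter pack; the attempt count and delays are arbitrary choices:

```kotlin
// Retry a suspending call with exponential backoff.
// The final attempt lets the exception propagate to the caller.
suspend fun <T> withBackoff(
    attempts: Int = 3,
    initialDelayMs: Long = 1_000,
    block: suspend () -> T
): T {
    var delayMs = initialDelayMs
    repeat(attempts - 1) {
        try {
            return block()
        } catch (e: Exception) {
            delay(delayMs)   // wait before retrying, doubling each time
            delayMs *= 2
        }
    }
    return block()
}
```

In sendPrompt() you could then wrap the request, e.g. `val response = withBackoff { generativeModel.generateContent(...) }`.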
For further customization and building out more complex AI functionalities, make sure to follow Google’s official documentation and keep exploring new ways to leverage Gemini AI in your Android applications.
Conclusion
Finally, I have to say that integrating Gemini AI to build an application quickly is a great idea, and it is free. But do not share the API key allocated to your application anywhere. If you have any questions, please let me know in the comments.
In my next article, I will compare Gemini code completion and Copilot code completion.

Follow me!
Did you like this article?
You can subscribe to my newsletter below and get updates about my new articles.