This week, we’re returning to Google I/O for the 4th year in a row to share how we’re making Firebase better for all app developers, from the smallest one-person startup to the largest enterprise businesses. No matter how many times we take the stage, our mission remains the same: to help mobile and web developers succeed by making it easier to build, improve, and grow your apps. Since launching Firebase as Google’s mobile development platform at I/O 2016, we’ve been continuously amazed at what you’ve built with our tools. It is an honor to help you on your journey to change the world!
For example, in Uganda, a start-up called Teheca is using Firebase to reduce the mortality rate of infants and new mothers by connecting parents with nurses for post-natal care. Over in India, where smartphones are quickly replacing TVs as the primary entertainment source, Hotstar, the country’s largest video streaming app, is using Firebase with BigQuery to transform the viewing experience by making it more social and interactive. Here’s how they’re doing it, in their own words:
Stories like these inspire us to keep making Firebase better. In fact, we’ve released over 100 new features and improvements over the last 6 months! Read on to learn about our biggest announcements at Google I/O 2019.
New translation, object detection and tracking, and AutoML capabilities in ML Kit
Last year, we launched ML Kit, bringing Google's machine learning expertise to mobile developers in a powerful, yet easy-to-use package. It came with a set of ready-to-use on-device and cloud-based APIs, plus support for custom models, so you could apply the power of machine learning to your app regardless of your familiarity with ML. Over the past few months, we’ve expanded these capabilities with solutions for Natural Language Processing, such as the Language Identification and Smart Reply APIs. Now, we’re launching three more capabilities in beta: the On-device Translation API, the Object Detection & Tracking API, and AutoML Vision Edge.
The On-device Translation API allows you to use the same offline models that support Google Translate to provide fast, dynamic translation of text in your app into 58 languages. The Object Detection & Tracking API lets your app locate and track, in real time, the most prominent object in a live camera feed. With AutoML Vision Edge, you can easily create custom image classification models tailored to your needs. For example, you may want your app to identify different types of food, or distinguish between species of animals. Whatever your need, just upload your training data to the Firebase console, and Google’s AutoML technology will build a custom TensorFlow Lite model for you to run locally on your users’ devices. And if you find that collecting training datasets is hard, you can use our open source app, which makes the process simpler and more collaborative.
Customers like IKEA, Fishbrain, and Lose It! are already using ML Kit’s capabilities to enhance their app experiences. Here’s what they had to say:
"We’re working with Google Cloud to create a new mobile experience that enables customers, wherever they are, to take photos of home furnishing and household items and quickly find that product or similar in our online catalogue. The Cloud Vision Product Search API provided IKEA a fast and easy way to index our catalogue, while ML Kit’s Object Detection and Tracking API let us seamlessly implement the feature on a live viewfinder on our app. Google Cloud helps us make use of Vision Product Search and we are very excited to explore how this can help us create a better and more convenient experience for our customers.” - Susan Standiford, Chief Technology Officer of Ingka Group, a strategic partner in the IKEA franchise system and operating IKEA in 30 markets.
“Our users are passionate about fishing, so capturing and having access to images of catches and species information is central to their experience. Through AutoML Vision Edge, we’ve increased the number of catches logged with species information by 30%, and increased our species recognition model accuracy from 78% to 88%.” - Dimitris Lachanas, Android Engineering Manager at Fishbrain
“Through AutoML Vision Edge, we were able to create a highly predictive, on-device model from scratch. With this improvement to our state-of-the-art food recognition algorithm, Snap It, we’ve increased the number of food categories our customers can classify in images by 21% while reducing our error rate by 36%, which is huge for our customers.” - Will Lowe Ph.D., Director of Data Science & AI, Lose It!
Performance Monitoring now supports web apps
Native mobile developers have loved using Firebase Performance Monitoring to find out which parts of their app run slower than expected, and for which users. Today, we’re excited to announce that Performance Monitoring is also available for web apps, in beta, so web developers can understand how real users are experiencing their app in the wild.
After you add a few lines of code to your site, the Performance Monitoring dashboard will track and visualize high-level web metrics (like page load and network request stats) as well as more granular metrics (like time to first paint and first input delay) across user segments. The dashboard also lets you drill down into those segments by country, browser, and more. Now you can get deep insight into the speed and performance of your web apps and fix issues fast, so your end users have a consistently great experience. By adding web support to one of our most popular tools, we’re reaffirming our commitment to making app development easier for both mobile and web developers.
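Here’s a minimal sketch of what that setup might look like with the Firebase JS SDK; the config values and the custom trace name below are placeholders for your own project, not part of the product announcement:

```ts
// Enable Performance Monitoring in a web app (Firebase JS SDK).
import * as firebase from 'firebase/app';
import 'firebase/performance';

firebase.initializeApp({
  apiKey: '<YOUR_API_KEY>',       // placeholder values
  projectId: '<YOUR_PROJECT_ID>',
  appId: '<YOUR_APP_ID>',
});

// Page load and network request metrics are collected automatically
// once the Performance Monitoring SDK is initialized.
const perf = firebase.performance();

// Optionally, measure a custom flow with a trace (trace name is hypothetical).
const trace = perf.trace('load_product_list');
trace.start();
// ... do the work you want to measure ...
trace.stop();
```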
Brand new audience builder in Google Analytics for Firebase
Google Analytics for Firebase provides free, unlimited, and robust analytics so you can measure the things that matter in your app and understand your users. A few weeks ago, we announced advanced filtering in Google Analytics for Firebase, which allows you to filter your Analytics event reports by any number of different user properties or audiences at the same time.
Today, we’re thrilled to share that we’ve completely rebuilt our audience system from scratch with a new interface. This new audience builder includes new features like sequences, scoping, time windows, membership duration, and more to enable you to create dynamic, precise, and fresh audiences for personalization (through Remote Config) or re-engagement (through Cloud Messaging and/or the new App campaigns).
For example, you can now build a “Coupon users” audience of people who redeem a coupon code within your app and then complete an in-app purchase within 20 minutes.
In addition to the three big announcements above, we’ve also made the following improvements to other parts of Firebase.
Support for collection group queries in Cloud Firestore
In January, we graduated Cloud Firestore - our fully-managed NoSQL database - out of beta and into general availability, with lower pricing tiers and new locations. Now, we’ve added support for collection group queries, which let you search for fields across all collections with the same name, no matter where they are in the database. For example, imagine you had a music app that stored its data like so:
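For instance, you might keep a songs subcollection under each artist document (the field names here are hypothetical):

```
artists (collection)
 └── {artistId} (document)
      └── songs (subcollection)
           └── {songId} (document): title, durationSeconds, ...
```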
This data structure makes it easy to query the songs by a given artist. But until today, it was impossible to query across artists, such as finding the longest songs regardless of who wrote them. With collection group queries, Cloud Firestore can now perform these searches across all song documents, even though they live in different collections. This means it’s easier to organize your data hierarchically while still being able to search for the documents you want.
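With that layout, the query might look something like this using the Firebase JS SDK (the durationSeconds field is an assumption from the sketch above, and Firestore will point you to the collection-group index it needs if one doesn’t exist yet):

```ts
import * as firebase from 'firebase/app';
import 'firebase/firestore';

// Assumes firebase.initializeApp(...) has already been called with your config.
const db = firebase.firestore();

// Find the ten longest songs across every "songs" subcollection,
// regardless of which artist document they live under.
async function findLongestSongs() {
  const snapshot = await db
    .collectionGroup('songs')
    .orderBy('durationSeconds', 'desc')
    .limit(10)
    .get();

  snapshot.forEach((doc) => {
    console.log(doc.ref.path, doc.data().title, doc.data().durationSeconds);
  });
}
```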
Cloud Functions emulator
We’ve also been steadily improving our tools and emulator suite to make local app development and testing more productive. In particular, we’re releasing a brand new Cloud Functions emulator that can also communicate with the Cloud Firestore emulator. So if you want to build a function that triggers on a Firestore document update and writes data back to the database, you can code and test that entire flow locally on your laptop, for much faster development.
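As a rough sketch of that flow (the document paths and collection names here are hypothetical, not a prescribed pattern):

```ts
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Hypothetical trigger: whenever a song document is updated, write a
// denormalized entry back to a "recentlyUpdated" collection in Firestore.
export const onSongUpdate = functions.firestore
  .document('artists/{artistId}/songs/{songId}')
  .onUpdate(async (change, context) => {
    const after = change.after.data();
    await admin.firestore()
      .collection('recentlyUpdated')
      .doc(context.params.songId)
      .set({
        title: after.title,
        updatedAt: admin.firestore.FieldValue.serverTimestamp(),
      });
  });
```

You can then run something like `firebase emulators:start --only functions,firestore` and exercise this trigger end to end on your own machine, without touching your production project.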
Configurable velocity alerts in Crashlytics
Firebase Crashlytics helps you track, prioritize, and fix the stability issues that erode app quality, in real time. One of the most important alerts in Crashlytics is the velocity alert, which notifies you when an issue suddenly increases in severity and impacts a significant percentage of your users. However, we recognize that every app is unique, and a one-size-fits-all alerting threshold might not be what’s best for you and your business. That’s why you can now customize velocity alerts and determine how often, and under what conditions, you want to be alerted about changes to your app’s stability. We’re also happy to announce that we’ve expanded Crashlytics to include Unity and NDK support.
Improvements to Test Lab
Firebase Test Lab makes it easy for you to test your app on real, physical devices, straight from your CLI or the Firebase console. Over the past few months, we’ve released a number of improvements to Test Lab. We’ve expanded the types of apps you can run tests on by adding support for Wear OS by Google and Android App Bundles. We’ve also added ML vision to Test Lab’s monkey action feature so we can more intelligently simulate where users will tap in your app or game. Lastly, we’ve made your tests more reliable with test partitioning, flaky test detection, and the robo action timeline, which tells you exactly what the crawler was doing while the test was running.
Greater control over Firebase project permissions
Security and data privacy remain among our top priorities. We want to make sure you have control over who can access your Firebase projects, which is why we’ve leveraged Google Cloud Platform’s Identity & Access Management controls to give you finer-grained permissions. Right from the Firebase console, you can control who has access to which parts of your Firebase project. For example, you can grant access to a subset of tools so that team members who run notification campaigns aren’t able to change your Firebase database’s security rules. You can go even further and use the GCP console to create custom roles that permit access to only the actions your team members need to take.
More open-sourced SDKs
To make Firebase more usable and extensible, we’re continuing to open source our SDKs and accept contributions from the community. We are committed to giving you transparency and flexibility with the code you integrate into your mobile and web apps. Most recently, we open sourced our C++ SDK.
In case you missed the news at Cloud Next 2019, here’s a quick recap of the updates we unveiled back in April:
In addition to making Firebase more powerful, we’ve also been hard at work bringing the best of Fabric into Firebase. We know many of you have been waiting for more information on this front, so we have outlined our journey in more detail here.
We’re continuing to invest in Firebase and, as always, we welcome your feedback! With every improvement to Firebase, we aim to simplify your app development workflows and infrastructure needs, so you can stay focused on building amazing user experiences. To get a sneak peek at what’s next, join our Alpha program and help us shape the future.