We at the Firebase office all enjoyed playing with Hanley Weng's "CoreML-in-ARKit" project. It displays 3D labels on top of images it detects in the scene. While the on-device detection provides a fast response, we wanted to build a solution that gave you the speed of the on-device model with the accuracy you can get from a cloud-based solution. Well, that's exactly what we built with our MLKit-ARKit project. Read on to find out more about how we did it!
ML Kit for Firebase is a mobile SDK that enables developers to bring Google's machine learning (ML) expertise to their Android and iOS apps. It includes easy-to-use on-device and cloud-based Base APIs and also offers the ability to bring your own custom TFLite models.
ARKit is Apple's framework that combines device motion tracking, camera scene capture, advanced scene processing, and display conveniences to simplify the task of building an AR experience. You can use these technologies to create many kinds of AR experiences using either the back camera or front camera of an iOS device.
In this project, we push ARKit frames from the back camera into a queue. ML Kit processes each frame to identify the objects it contains.
When the user taps the screen, ML Kit returns the detected label with the highest confidence. We then create a 3D bubble text and add it into the user's scene.
ML Kit makes ML easy for all mobile developers, whether you have experience in ML or are new to the space. For those with more advanced use cases, ML Kit allows you to bring your own TFLite models, but for more common use cases, you can implement one of the easy-to-use Base APIs. These APIs cover use cases such as text recognition, image labeling, face detection and more. We'll be using image labeling in our example.
Base APIs are available in two flavors: on-device and cloud-based. The on-device APIs are free to use and run locally, while the cloud-based ones provide higher accuracy and more precise responses. The cloud-based Vision APIs are free for the first 1,000 uses per month and paid after that. They provide the power of full-sized models from Google's Cloud Vision APIs.
We are using the ML Kit on-device image labeling API to get a live feed of results while keeping our frame rate steady at 60fps. When the user taps the screen, we fire off an async call to the Cloud image labeling API with the current image. When we get a response from this higher accuracy model, we update the 3D label on the fly. So while we continuously run the on-device API and use its result as the initial source of information, the higher accuracy Cloud API is called on demand and its result eventually replaces the on-device label.
Which result to show?
While the on-device API is real-time with all the processing happening locally, the Cloud Vision API makes a network request to the Google Cloud backend, leveraging a larger, higher accuracy model. Once the response arrives, we replace the label provided by the on-device API with the result from Cloud Vision API.
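To make that flow concrete, here is a minimal sketch of the queue-and-tap structure in Swift. The two labeler functions are hypothetical placeholders standing in for ML Kit's on-device and cloud image labeling calls, not the SDK's actual method names:

import ARKit

// Hypothetical placeholders for ML Kit's on-device and cloud image labelers.
func runOnDeviceLabeler(on image: CVPixelBuffer, completion: @escaping (String) -> Void) { /* ... */ }
func runCloudLabeler(on image: CVPixelBuffer, completion: @escaping (String) -> Void) { /* ... */ }

final class LabelingController {
    private let sceneView: ARSCNView
    private let visionQueue = DispatchQueue(label: "com.example.visionQueue") // serial queue for ML work

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
    }

    // Called continuously (for example, from the ARSession delegate):
    // run the fast on-device labeler on the latest camera frame.
    func processCurrentFrame() {
        guard let frame = sceneView.session.currentFrame else { return }
        visionQueue.async {
            runOnDeviceLabeler(on: frame.capturedImage) { label in
                // Use the on-device label as the initial text for the 3D bubble.
                print("On-device label: \(label)")
            }
        }
    }

    // Called when the user taps the screen: ask the higher-accuracy
    // cloud labeler and swap in its result when the response arrives.
    func handleTap() {
        guard let frame = sceneView.session.currentFrame else { return }
        runCloudLabeler(on: frame.capturedImage) { label in
            print("Cloud label: \(label)")
        }
    }
}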
1. Clone the project
$ git clone https://github.com/FirebaseExtended/MLKit-ARKit.git
2. Install the pods and open the .xcworkspace file to see the project in Xcode.
$ cd MLKit-ARKit
$ pod install --repo-update
$ open MLKit-ARKit.xcworkspace
3. Set up a Firebase project and add its GoogleService-Info.plist file to the app target (alongside Info.plist).
At this point, the app should work using the on-device recognition.
★ The cloud label detection feature is still free for the first 1,000 uses per month. Click here to see additional pricing details.
At this point, the app should update labels with more precise results from the Cloud Vision API.
Firebase launched over six and a half years ago as a database, but since then we've grown into a platform of eighteen (18!!) products. And over the last year we've announced a number of new features to help you build better apps and grow your business. We also infused Firebase with more machine learning super-power, so you can make your apps smarter, and matured the platform, so Firebase works better for developers at large, sophisticated enterprises.
Since the end of the year is a great time for top-ten lists, we were going to cap off the year with our own "Top Ten List of Firebase launches." But, then, we realized we had more than ten launches we wanted to talk about, and we really don't like playing favorites. So instead, here's our "Thirteen Firebase Launches In No Particular Order Because They're All Great In Their Own Way" list for 2018. Enjoy!
At Google I/O, we launched one of our most exciting features of 2018: ML Kit for Firebase, a machine learning SDK for Android and iOS. ML Kit lets you add the power of machine learning to your app, without needing an advanced degree in neural networks. It provides a number of out-of-the-box solutions for performing tasks like recognizing text in images, labeling objects in photos, or detecting faces. And it will also let you use custom models, for those of you who are into building your own. (Bespoke artisanal neural networks are big among hipster data scientists these days.)
Notifications are a great way to get latent users back into your app, but how do you communicate with users who are actively using your app? In 2018, we launched Firebase In-App Messaging to help you send targeted and contextual messages to those active users. In-app messages are a great way to encourage app exploration and discovery, guiding users towards new features in your product or towards that important conversion event.
At Firebase, we're big fans of building scripts to make our lives easier; whether that's to automate common tasks, or to perform custom logic. To help with that goal, we launched three new REST APIs that you can use to automate your life (at least from a Firebase perspective). The Firebase Management API is great for automating tasks like creating new projects, the Remote Config REST API can be useful for customizing the way you update Remote Config values, and the Firebase Hosting API can be used to automatically upload certain files to your site.
Recently, StackBlitz and Glitch used the Management API to build integrations that allow you to deploy projects directly to Firebase Hosting. Start a project, write some code, click a few buttons, and voila! You've deployed your Firebase project to the web!
Good performance is one of the key factors for creating a great user experience. Firebase Performance Monitoring automatically collects performance metrics where it matters the most: the real world.
This year, Performance Monitoring graduated from beta into general availability. Along the way, we added helpful new features like an issue feed in the dashboard to highlight important performance problems your users are encountering. We've also added sessions view support for traces and network requests, which lets you dig deeper into an individual session of a trace, so you can see attributes and events that happened leading up to a performance issue.
We also released Firebase Predictions into GA. Predictions uses machine learning to intelligently segment users based on their predicted future behavior. Along the way, we added health indicators and evaluation criteria to every prediction, so you can better understand how reliable a prediction is, as well as the data being used to make it. We also integrated Predictions with BigQuery, so you have more control over your data.
Getting started with Predictions is as easy as flipping a switch in the console. We predict you're going to love it! (Sorry.)
The general availability party keeps on going! Cloud Functions hit GA and we also released a new version of the SDK. The new SDK adds "callable" functions that make it much easier to call server functions from the client, especially if your function requires authentication.
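As a quick illustration, invoking a callable function from an iOS client looks roughly like this (a minimal sketch; the addMessage function name and payload are made up for the example):

import FirebaseFunctions

let functions = Functions.functions()

// Call the deployed "addMessage" callable function. If the user is signed in,
// Firebase attaches their auth token to the request automatically.
functions.httpsCallable("addMessage").call(["text": "Hello from iOS"]) { result, error in
    if let error = error {
        print("Callable failed: \(error)")
        return
    }
    if let data = result?.data as? [String: Any] {
        print("Function returned: \(data)")
    }
}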
Cloud Functions also released a brand new library, firebase-functions-test, to simplify unit testing functions. This library takes care of the necessary setup and teardown, allowing easy mocking of test data. So in addition to simple standalone tests, you can now write tests that interact with a development Firebase project and observe the success of actions like database writes.
Firebase Test Lab went cross-platform in 2018 by adding support for iOS. Now you can write and run tests on real iOS devices running in our data centers. Test Lab supports ten models of iPhones and iPads running seven different versions of iOS, including iOS 12.
Test Lab also launched a number of improvements to Robo, a tool which runs fully automated tests on Android devices. Testing games is now easier, thanks to 'monkey actions' (which can randomly click on your screen) and game loops (which perform pre-scripted actions). You can also customize Robo better now, in case you need to sign in at the start of your app or add intelligent text to a search field.
Continuing the theme of testing, in 2018, we launched emulators for Firestore and the Realtime Database, so you can more easily unit test your security rules and incorporate them into a continuous integration environment. These emulators run locally and allow you to test your security rules offline so you can be confident before deploying to production. We also created a testing library that simplifies your test code.
From the beginning, Cloud Functions has tightly integrated important usage metrics with Stackdriver, Google Cloud's powerful monitoring service. To deepen our integration further, we linked the Realtime Database with Stackdriver. You can now see even more metrics than the Firebase console provides, such as load broken down by operation type and information about your downloaded bytes.
The real power of this integration is to set up alerts on metrics or errors so you can detect and respond to issues before your customers notice them.
Sometimes the reporting dashboards in the Firebase console don't give you the level of granularity or specific data slice that you need. That's where BigQuery - Google Cloud's data warehouse - and Data Studio - Google Cloud's data visualization tool - come into play.
We've given you the ability to export your Analytics data to BigQuery for a while now. This year, we added integrations with Predictions and Crashlytics, so you can export even more of your Firebase data into one central warehouse. Learn more about using Firebase and BigQuery together here.
Cloud Firestore is our next generation database with many of the features you've come to love from the Realtime Database, combined with the scale and sophistication of the Google Cloud Platform. Over the course of 2018, we've launched a number of improvements to Firestore, to make it better suited for complex enterprises.
We also added some nice features along the way -- we expanded offline support for the web SDK from one browser tab to multiple. We've added better support for searching documents by the contents of their arrays. And we added multiple new locations where you can store your Firestore data: Frankfurt, Germany and South Carolina, USA. (We'll be adding even more locations in 2019.)
The Firebase console is a crucial part of the Firebase workflow for just about any team. We spent a lot of time in 2018 making the console better than ever, adding a number of new features along the way.
These features make you more productive and confident in your app's security and performance. We can't wait to add more to the console in 2019!
For a while now, we've been hearing from some of you that you'd like an option to get enterprise-grade support for Firebase. To address that request, we added support for Firebase to our Google Cloud Platform (GCP) support packages, available in beta right now.
If you already have a paid GCP support package, our beta will let you get your Firebase questions answered through the GCP support channel - at no additional charge. When this new support graduates to general availability, it will include target response times, technical account management (for enterprise tier), and more. You can learn more about GCP support here.
If you're planning to stick with Firebase's free support, don't worry - we don't plan to change anything about our existing support model. Please continue to reach out to our friendly support team for help as needed!
It's been a great year, so we're going to take a little time with friends and family before we hit the ground running in January. However you celebrate the end of your year, we hope your December is full of happiness and relaxation. And if it happens to be full of building mobile or web apps, we hope you use Firebase! Happy building!
Hi, there, Firebase developers! We wanted to let you know about some important changes coming your way from Google Analytics for Firebase that will affect how we help you measure user engagement and sessions. This might also affect any BigQuery queries you might have written, so let's get right into the changes, shall we?
Up until now, sessions were measured using the following formula: when a user engaged with your app for more than ten seconds, the SDK logged a session_start event, and sessions were counted by tallying those events per pseudo_user_id.
With the latest version of the Firebase SDK, we're going to be changing how a session is measured. Specifically, sessions will no longer require ten seconds of engagement before they're counted, you can pass an extend_session parameter with your events to keep a session alive, and every event will now automatically include ga_session_id and ga_session_number parameters.
In the Firebase console, the biggest change you'll notice is that your app will have more sessions, because we'll be counting instances where users interact with your app for less than ten seconds. This also means that any kind of "average some_event per session" stat will decrease, since the number of sessions is going up.
On the BigQuery side of things, these new event parameters will make your life a whole lot easier. Analyzing anything by session should be really straightforward now -- you just need to group them by ga_session_id. So calculating your own "average xxx per session" values will be a lot easier in BigQuery.
For example, here's a query where we calculate how many level_complete_quickplay events an average user generates per session:
SELECT AVG(total_quickplays) as average_quickplays_per_session
FROM (
  SELECT COUNT(event_name) as total_quickplays,
    (SELECT value.string_value FROM UNNEST (event_params) WHERE key = "ga_session_id") as session_id
  FROM `firebase-public-project.analytics_153293282.events_xxxxxxxx`
  WHERE event_name = "level_complete_quickplay"
  GROUP BY session_id
  HAVING session_id IS NOT NULL
)
And if you want to figure out, say, how many sessions it typically takes before somebody makes a purchase, you can do that by analyzing the ga_session_number parameter.
In the past, Firebase measured total user engagement by recording the amount of time your user spent with the app in the foreground and then sending down those values (as incremental measurements) as user_engagement events. You could then calculate the total amount of time a user spent within your app by adding up the values of the engagement_time_msec parameter that were sent with each of these events.
These user_engagement events were typically sent when a user a) sent your app into the background, b) switched screens, c) encountered a crash, or d) used your app for an hour. As a result, it was very common to see user_engagement events sent alongside events like app_exception or screen_view, to the point where we asked ourselves, "Why are we sending down all these extra events? Why not just send engagement time as a parameter with these other events we're already generating?"
And so that's exactly what we're going to do, starting in early 2019. You will still occasionally see separate user_engagement events, but you will also start seeing engagement_time_msec parameters added to other events automatically generated by Google Analytics for Firebase. We're going to start with screen_view, first_open and app_exception events, but you might see them added to other events in the future.
On the Firebase console, nothing should change. Your app might end up using a little less data, since you're no longer sending down so many separate user_engagement events, but otherwise, nothing else should look different.
On the BigQuery side of things, you'll need to alter your queries slightly if you were calculating important metrics by filtering for user_engagement events. If you were, you'll need to alter those queries by looking for events that contain an engagement_time_msec parameter.
For example, here's a query that calculates the total user_engagement time for each user by summing up the engagement_time_msec parameter for user_engagement events. This might work today, but it will be inaccurate in the future.
SELECT SUM(engagement_time) AS total_user_engagement
FROM (
  SELECT user_pseudo_id,
    (SELECT value.int_value FROM UNNEST(event_params) WHERE key = "engagement_time_msec") AS engagement_time
  FROM `firebase-public-project.analytics_153293282.events_20181003`
  WHERE event_name = "user_engagement"
)
GROUP BY user_pseudo_id
So here's that same query, modified to look for all events that might have an engagement_time_msec parameter:
SELECT SUM(engagement_time) AS total_user_engagement
FROM (
  SELECT user_pseudo_id,
    (SELECT value.int_value FROM UNNEST(event_params) WHERE key = "engagement_time_msec") AS engagement_time
  FROM `firebase-public-project.analytics_153293282.events_20181003`
)
WHERE engagement_time > 0
GROUP BY user_pseudo_id
The nice thing about that second query is that it works both with the old way of measuring user engagement and the new one, so you can modify your BigQuery queries today, and everything will still work just fine when the new changes go into effect.
Update: Well, it took a little longer than planned, but this feature launched in April of 2020. If you've been using this second BigQuery query all along, then congratulations! Everything should continue working as before. If not, well, there's no better time to switch over.
We hope that these changes make your life a little easier in the long run, and offer only a minimal amount of disruption in the short term. In the meantime, if you have any questions, feel free to reach out on StackOverflow, or any of our official support forums.
Happy analyzing!
As we build Crashlytics and talk to our developers, we've found that the way they use our dashboards is often nuanced and specific to their team. We've done our best to incorporate the themes we hear most often into the dashboard you see in the Firebase console, but one dashboard solution simply isn't enough.
That's why we launched the Crashlytics integration with BigQuery, giving you the freedom to deeply explore your data. And, using Data Studio (a free tool that sits on top of BigQuery), you can make custom dashboards from your Crashlytics data that fit the unique way your team works. Data Studio allows your team members who aren't comfortable with SQL to easily work with the BigQuery data set. Data Studio dashboards are also easy to collaborate on and share, so your team can work more efficiently.
Today, we're launching a Data Studio template that gives you a preview of what's possible with Crashlytics and BigQuery. Let's take a closer look at the template.
The overview section of our template shows which OS versions crash the most, which devices crash the most, and how those crashes are trending over time. You can customize each section to display the results of the exact queries you want, presented the way you need based on your business logic. If you want to keep an eye on the deprecation of an old operating system, you can change or filter the queries that back the dashboard directly.
Up until now, exploring your crash reports by custom metadata like Experiment ID or an Analytics breadcrumb has been limited, making it tough to identify which variant in an experiment is least stable or which level in a game has the most crashes. Now, when you export your data to BigQuery, it's easy to run any deep analysis you want, and then visualize your report with Data Studio or any other business system you use.
As an example, say that you set up your Android game to log which level the player is on when a crash occurs:
Crashlytics.setInt("current_level", 3);
Now you can filter by the presence of a key and its values. We've created a sample dashboard for filtering these in our Data Studio template.
We know our largest apps have different teams that specialize in specific areas of the code. For that, we've made it easy to filter by specific files in our Data Studio template.
Our Data Studio template is totally customizable, meaning that if you'd rather filter on a different part of the schema or build more advanced tools, you can easily adapt it to your needs. You can adjust the template using the Data Studio UI or edit the backing BigQuery queries.
Your team can work together by sharing the dashboard in Data Studio. This means team members don't need to learn SQL to get the benefits of the Crashlytics integration with BigQuery.
You'll also be able to select date ranges longer than 90 days if you set up retention in BigQuery. The Crashlytics dashboard currently retains data for 90 days, but with BigQuery you own the retention and deletion policies, making it much simpler for your team to track year-over-year trends in stability data. This means you'll be able to customize your dashboard to display data over the exact period you are interested in.
With just a few clicks from the Firebase Crashlytics dashboard you can enable daily exports of all raw crash data on a per-app or per-project basis. This includes your stack traces, logs, keys and any other crash data. You can also use the new BigQuery sandbox to get started for free.
Once you link Crashlytics to BigQuery, follow these instructions to connect this template with your Crashlytics dataset.
If you are a current Fabric user, you can gain access to BigQuery export and all the other features of Firebase by linking your app in the Fabric dashboard. Check out this link for details and documentation.
We hope this improvement makes it even easier for you to dig into your crash reporting data and efficiently debug your app! As always, if you have any questions, you can find us on Twitter (@firebase) and on Stack Overflow. Happy debugging!
No one likes litter, so why do we live with it? Litter directly impacts a community's health, safety, and economic potential. At Rubbish, we believe people should love where they live. That's why we created the Rubbish app, which empowers neighborhoods to tackle litter at the local level by photographing and reporting litter, sharing and analyzing the data, and engaging community partners to clean up together. Our mission is to build stronger, healthier communities with less trash, more beautiful streets, and happier residents, and we can't do it without Firebase.
Here's a quick video of how Rubbish works
The concept for Rubbish resulted from a moment of panic and frustration: while we (Elena and Emin, co-founders of Rubbish) were walking the streets of New York City with Elena's dog Larsen, he choked on a chicken bone. Luckily, he was ok, but the two of us were not. Why was litter an unfortunate part of city living, with no effective solution to address it?
This is Larsen. He's a good boy.
We decided to tackle this issue and find an innovative solution together. We started to document litter daily, taking pictures and noting problem areas in our communities, which quickly accumulated into thousands of photos sitting in a stagnant shared album. We needed a better way to store and organize the information we were collecting so we could use it to make a difference. We also needed a way to share the photos and their metadata with several audiences (governments, community partners) and on several social media channels through our app. Each platform had its own set of requirements and specifications, and the idea of creating the infrastructure to accomplish this was daunting, until we discovered Firebase.
To combat the litter problem and make real change, we needed a quick, seamless way to gather, process, and share all the information surrounding each documented piece of trash.
We evaluated lots of options, but Firebase stood out because it provided a comprehensive set of tools that allowed us to quickly build the backend infrastructure of the Rubbish app and address the challenges of storage, data validation, processing, and distribution.
For example, we faced the challenge of quickly storing and tracking user-generated photos. Cloud Storage and Firestore allow us to keep track of what is being reported and where. Another challenge was verifying user submissions, especially ones requiring priority attention from third parties, like reports that need local agency involvement. With the help of Cloud Functions for Firebase, we set up a dashboard to summarize the data and generate reports in one place. We also instrumented Cloud Functions to act as a safety net and help us with quality control. For instance, before reports are automatically formatted and sent to local government agencies like San Francisco 311 for follow-up, the functions check that the submissions came from validated users with good track records, and are in the correct vicinity of the agency. We use Cloud Functions to trigger a validation review via our backend and via email whenever a photo is uploaded. Then, a member of our team evaluates the uploaded image to make sure it's clear and relevant. This makes an otherwise complicated process easy and automated.
Additionally, we use Firebase Authentication and Security Rules to ensure that only the intended information gets shared, and to protect each user's privacy and security. Firebase allows us to seamlessly integrate our data with APIs from local governments, social networks, and our own app in a few lines of code. With Firebase, Rubbish can effectively store, share, and process the data to create real insights and impact. In addition to Firebase, we also use some of Google Cloud Platform's APIs, such as the Google Sheets API, Maps SDK for iOS, Places API, Geocoding API, and Cloud Runtime Configuration API.
Firebase-powered dashboard that allows us to manage user submissions.
This is one of our dashboards for tracking neighborhood trends.
As we grew our software development team, we were concerned about the time and resources it takes a new team member to get up to speed and become productive. Firebase provided easy onboarding of new members with user-friendly training resources, like robust sample projects, fun developer videos, straightforward technical documentation, and more. In fact, our new engineers are onboarded and ready to contribute three times faster, saving us significant time and resources that can now be focused elsewhere. We reduced development time on new features, as well as the time needed for maintenance, security handling, and developer onboarding, which maximizes our productivity.
In short, Firebase enables start-up teams like us to communicate effectively, share information, and grow. It's a huge value for us that Firebase allowed us to effectively engage such a variety of talented, passionate individuals.
Our team and their favorite Firebase product or their favorite snack.
Since Firebase covers the backend infrastructure behind the app and facilitates collaboration on our team, we can focus on expanding our field testing and cultivating relationships with important partners. We launched a pilot program on San Francisco's Polk Street in August 2018, working with the community to sponsor resident-led street cleanings. We use the data we collect to inform local sponsors and residents about the progress, including summaries of the number and types of trash collected - all that wouldn't be possible without Firebase.
We've also been collaborating with the San Francisco Community Benefit Districts and the local San Francisco government to optimize and track improvements through Rubbish. For example, we pinpointed the largest source of cigarette butts (customers at bars and restaurants) and worked with these businesses to install cigarette receptacles. We're excited to find even deeper trends and new ways to analyze and address the litter problem.
As Rubbish continues to map and track litter, we are finding that trash patterns on the street can be as dynamic as traffic patterns. Local events, the weather, and time of day all play a role in determining what your street will look like when you step out for your morning walk. The data we collect is providing insight into important trends like these and is being used to help local communities sponsor and track clean-up efforts in a meaningful way. By relying on Firebase to store, process, and analyze an increasing amount of data, we feel confident that we can engage and empower individuals, communities, and governments to tackle extensive, seemingly unsolvable problems like litter.
Firebase Performance Monitoring provides detailed insights into how your app performs in the hands of real users, giving visibility into bottlenecks that could be causing churn and revenue loss.
We've received positive feedback about the richness of performance data Firebase Performance Monitoring surfaces. However, a common complaint has been that it's difficult to determine the cause of these issues from the data we surface, forcing developers to spend a lot of time investigating the root cause of performance issues.
For example, it's great to know that your app launches slowly for 40% of your users, but why is this happening? Even though developers can use attributes such as app version, OS version, and geography to filter data in the dashboard, the data still may not give enough detail to pinpoint the exact issue at hand.
To address the need for actionable insights, we are pleased to launch the ability to dig deeper into an individual session of a trace, so you see attributes and events that happened leading up to a performance issue. With this feature, developers can see three new categories of information:
Surfacing these extra details in the context of a trace will help improve debuggability and issue resolution for performance issues.
Let's see how sessions work with a concrete example. Imagine you are an e-retail app developer using a custom trace, productImageLoading, to measure how long it takes to load an image of an item in your catalogue. You notice that an issue appears in the Firebase Performance Monitoring console for this trace because the product images are loading slower than the defined threshold of 200ms.
Performance Monitoring surfaces emerging issues.
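For reference, defining a custom trace like the productImageLoading one above only takes a couple of calls with the Performance Monitoring SDK. This is a minimal sketch; the image URL and the "load_result" attribute are made up for the example:

import Foundation
import FirebasePerformance

let productImageURL = URL(string: "https://example.com/product.jpg")!  // hypothetical image URL

// Start a custom trace around the image load.
let trace = Performance.startTrace(name: "productImageLoading")

URLSession.shared.dataTask(with: productImageURL) { data, response, error in
    // Optionally attach an attribute you can filter on later in the console.
    trace?.setValue(error == nil ? "success" : "error", forAttribute: "load_result")
    trace?.stop()
}.resume()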
Previously, if you clicked on the issue to get more details, you would see information like the median time for the trace, and you could slice the data by various segmentations, such as country, device, etc.
The issue details page shows more information and allows for data slicing
While the details page is helpful, it shows the information aggregated among all trace samples, which doesn't give enough context about other factors that may have contributed to the issue.
This is where drilling down into a session of a trace becomes powerful. With this new feature, you can now examine device properties, system usage, traces and network requests that happened around the same time as the specific trace instance being investigated. You can access all sessions for the metric from the top bar of the metrics detail page. If you're already segmenting your data by an attribute like app version or country, then you can click through to a pre-filtered view of sessions.
Sessions has 2 entry points
In the sessions view, you can narrow down sessions corresponding to a particular percentile range of the trace duration and look at the details of the trace instances for that range. The percentile range groups sessions into cohorts based on their performance, making it easier to find the sessions with the worst performance:
Sessions view showing CPU, memory, traces and network requests for a percentile range
Looking at a product image loading session in the 90-95 percentile range, you can examine the CPU, memory, traces, and network requests recorded around that slow trace instance.
Based on the above trace session data, you can see that requesting a large image impacted memory and CPU, and subsequently slowed down loading of the product image. This helps you pinpoint where in your code to investigate the issue further.
This is just an example of the powerful debuggability that comes with the new feature. We hope that developers are able to use this new feature in a myriad of use cases to bridge the gap between cause and effect and greatly reduce time spent debugging trace issues.
To get started on iOS or Android, please see our docs here. If you have any feedback, feel free to share with us through our support channel. Happy building!
Notifications are one of the most powerful ways of bringing latent users back to your app. Properly timed and targeted notifications can be vital in increasing engagement. That's why we've redesigned the Firebase notifications dashboard to support much more sophisticated and powerful notification campaigns.
The old notifications dashboard let you set up notification campaigns as one-time alerts that could go out immediately, or be scheduled for a later date. For example, with a few clicks, you could set up a notification campaign that would remind new users who failed to complete onboarding to do so on Monday. However, it was not possible to automate this reminder to go out every Monday - unless you did it manually.
In the new Firebase notifications dashboard, we have added the ability to create recurring campaigns. Recurring campaigns are notification campaigns that run automatically whenever a user meets the targeting conditions. Now, it's easy to set up that weekly reminder to encourage new users to complete onboarding. Or, perhaps you want to offer bi-weekly discounts on in-app purchases to spenders to nudge them towards a purchase - that's also possible!
The new notifications dashboard allows you to set user-level message frequency caps, so you can limit the number of times a user gets a message to prevent spamming them. You can limit messages to only be sent once per user, or allow one message over a specified number of days.
Perhaps you want to send a welcome message once to each new user. Use a single message to target every new user once. Or, perhaps you want to encourage users to check out a tutorial on how to use the app. You can send a notification once every few days to guide them towards the action until it's been completed. And since targeted segments are dynamic, users who meet the criteria will automatically start receiving notifications, and users who no longer meet the criteria will stop receiving the targeted notifications. This means your notification will only be delivered to users who find it relevant.
Users can receive a notification just once
Users can receive notifications at a custom interval
Untargeted batch and blast notifications are annoying and can cause churn. It's vital to carefully segment the right users so your notification appears welcome and relevant to their recent interaction with your app - not out of place and random. The new notifications dashboard includes a more sophisticated segment builder that gives you the ability to target prevalent user characteristics, like last app engagement and the number of days since they first opened the app. This targeting is built into the dashboard, so you don't have to add any code to get these new parameters.
Finally, we also improved the results section of the notifications dashboard so you can better monitor the performance of your campaigns and make adjustments as needed. In the new notifications dashboard, you can now track the effectiveness of recurring campaigns day-by-day. Here, you can see daily data points for notification sends, opens, and conversions. You'll also notice that the graphs have been updated from a bar chart to a time-series graph, which are more intuitive and easier to interpret.
The redesigned Firebase notifications dashboard offers new, powerful campaign options, sophisticated targeting, and rich analytics to track the progress of your notifications campaigns. If you're new to Firebase notifications, get started with the Firebase Cloud Messaging guides.
Check out the Firebase console to set up your notification campaigns today!
Last year at Firebase Summit, we introduced you to Predictions, a machine learning product that helps you smartly segment your users based on their predicted future behavior. Without requiring anyone on your app team to have ML expertise, Predictions gives you insight into which segments of users are likely to churn or spend (or complete another conversion event) so you can make informed product decisions and grow your app.
As of today, Predictions makes more than 6 billion predictions per day for our developers and allows them to take meaningful actions by making predictive segments available for targeting in Remote Config, Cloud Messaging, In-App Messaging, and A/B Testing.
This year at Firebase Summit, we announced that Predictions has graduated out of beta and into general availability with a host of new features that we added based on your feedback.
Since predictions continuously update based on actual user behavior inside your app, many of you told us you wanted to know how stable a prediction was before integrating it into your app.
To help answer this question, we created a health indicator at the bottom of each predictive segment card that gives you a snapshot of how a certain prediction is performing:
Image 1: Green means it has been performing consistently well over the last two weeks
Image 2: Yellow means it is performing well today but did not meet the quality threshold some time in the past two weeks
Image 3: Red means it is not performing well today and had other performance issues over the last two weeks
It is worth mentioning that actions targeted with Predictions have a fail-safe mechanism, so if a predictive segment is performing poorly, it simply turns inactive. That means, if you are using Remote Config to deliver a set of values to users in that predicted group, Remote Config will gracefully fall back to your default values if the predictive segment decreases in reliability. Any notifications or in-app messages directed at that predictive segment will also not trigger until the predictive segment increases in accuracy.
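As an example of what that fallback looks like in practice, here's a minimal sketch of fetching a Remote Config value whose targeted variant is delivered only to a predictive segment. The discount_offer_enabled parameter name is made up for illustration; the fetch and activate calls are the standard Remote Config APIs:

import FirebaseRemoteConfig

let remoteConfig = RemoteConfig.remoteConfig()

// In-app defaults are what users get whenever the predictive segment's
// condition doesn't apply -- including when the segment turns inactive.
remoteConfig.setDefaults(["discount_offer_enabled": false as NSObject])

remoteConfig.fetch(withExpirationDuration: 3600) { status, error in
    guard status == .success, error == nil else { return }
    remoteConfig.activateFetched()
    let showDiscount = remoteConfig.configValue(forKey: "discount_offer_enabled").boolValue
    print("Show discount offer: \(showDiscount)")
}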
To help you understand how we assess the quality of a prediction, we are now exposing our evaluation criteria. For every predictive segment, we use a portion of your historical data from the last 28 days that we hold out during the model training phase.
We then compare the results of the prediction to what actually happened. This gives us two ways to score the prediction: how many of the users in the predictive segment actually behaved in the predicted way (we call that true positive rate) and how many users in the predictive segment were incorrectly classified (or in more technical terms, the false positive rate).
You can access this data from the bottom of the prediction card
Tapping on the health indicator exposes these values.
By exposing these two scores to you, you can now make a better determination about which risk profile to choose for your action.
Another common question we received during our beta phase is what went into creating a predictive segment. We now offer a details page that gives you the ingredient list! You can click through and see what data our model makes use of. This includes event frequency, volume, and parameters as well as other data like device language, freshness of app install and more.
The last thing we are excited to announce is that you can now export your raw predictions data into BigQuery. This will give you access to the raw prediction score, the thresholds we used for each risk profile, as well as the final result. You can use this data to create your own risk profiles or, if you supply your own user_id property in analytics, to do sophisticated analysis with your analytics data. For example, you can find out which countries exhibit the highest potential to churn or spend!
We are humbled to have gained your trust over the past year and hope these improvements make it easier for you to make the most out of Predictions in your mobile apps and games. As always, if you have any questions, you can find us on Twitter (@firebase) and on Stack Overflow.
For more information on these updates, check out our docs below!
Predictions risk tolerance and performance
Predictions model inputs and details page
Predictions data export to BigQuery
When Test Lab was originally launched with Firebase in 2016, it supported only Android devices. At Google I/O 2018 in May, Test Lab launched closed beta support for iOS. This included a limited set of iOS devices and a basic UI.
Building comprehensive tests for Android involves writing code, using Espresso and UIAutomator, that acts as a sort of "remote control" for your app. Similarly, on iOS, testing is performed using XCTests. In both cases, Test Lab can run your tests against actual devices in a cloud-hosted device farm.
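For context, the kind of XCTest UI test that Test Lab can run on its iOS devices can be as small as this (a generic sketch, not tied to any particular app):

import XCTest

class SmokeUITests: XCTestCase {
    func testAppLaunchesToFirstScreen() {
        let app = XCUIApplication()
        app.launch()
        // Assert that the app's first window appears within 10 seconds.
        XCTAssertTrue(app.windows.firstMatch.waitForExistence(timeout: 10))
    }
}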
At the Firebase Summit in Prague at the end of October, the Test Lab team announced general availability of support for iOS, including ten models of iPhones and iPads running seven different versions of iOS, including iOS 12. We have also improved the iOS documentation and console experience for you.
Test Lab also launched a number of improvements to Robo, a tool which runs fully automated tests against your app running on Android devices. Here's what's new with Robo.
Games are difficult to crawl because they often have a highly customized UI, rather than using system widgets. This makes it difficult for Robo to crawl the game's experience. Now, if Robo detects that the app under test is actually a game, it will perform random taps and swipes in an effort to interact with the game's UI. This can yield useful crash and performance data and is an early but significant step towards more meaningful automated game crawling.
Test Lab now detects and warns if your APK makes use of internal Android APIs. On Android P and newer, using such APIs can crash your app. Whenever such an API is accessed during a Robo crawl, a stack trace is recorded in the device logs. This pinpoints the location in your app's code where the violation occurs.
Test Lab now warns developers when it notices that Robo got stuck in a crawl. For example, if the user is presented with a complicated sign-up form or a login screen, it may be difficult for Robo to satisfy the requirements of the form. In situations like this, Robo will suggest an action to the developer to help it continue a full crawl, such as providing test credentials or writing a Robo Script.
If you aren't in the habit of regularly testing your app, consider giving Test Lab a try at no cost using the free daily quota of tests. No coding is necessary to run a Robo test on Android - just upload your APK to get started. And be sure to let us know what you think in the #test-lab channel of the Firebase Slack.
As the Product Manager of Firebase Remote Config, a product that helps you modify your app without deploying a new version, I spend a lot of time talking to our customers, and one of the most common requests I hear is, "Can I get rid of the Remote Config cache and fetch new values right away?"
While we can't get rid of the cache entirely -- caching ensures that the Remote Config service stays up and running, free of charge, no matter how many millions of users you have -- thanks to some new features we've added to Cloud Functions for Firebase, you can now ensure that your users always get fresh Remote Config values whenever they open your app!
Before we get into our solution, let me explain why the problem isn't as bad as you think, by looking at a typical app that has a Remote Config cache time set to 4 hours. You might think that, if this app developer publishes new Remote Config values, most of their users won't see these new values until several hours later.
But the fact is, the vast majority of their users will see the new Remote Config results as soon as they run the app! Remember, the way the cache works is that it looks at the last time Remote Config successfully fetched data. So anybody who last opened their app more than 4 hours ago will retrieve fresh data.
To put it another way, if your user opens up your app at 7:00 pm, you push out new values at midnight, and then they open up your app again at 12:15 am, Remote Config will fetch your new values from the server. Sure, it's only been 15 minutes since you published your values, but their cache is over 5 hours old, so Remote Config will fetch fresh values.
Nevertheless, I understand that it's really important for many of you to push out new changes to all of your users right away. Maybe you want feature flags to be disabled quickly. Or, maybe you have time-sensitive values, like marketing campaigns or a flash sale, in which case it would be awkward for your users to still see messaging for a sale that's no longer running.
Fortunately, we've built a new integration with Cloud Functions for Firebase that will make it easier for you to build custom behavior based on events that happen in Remote Config. Specifically, you can now use new triggers to write code that runs whenever your team publishes Remote Config values -- whether that's through the Firebase Console or the REST API.
There are a number of ways you can use these Remote Config triggers. One significant way is to use them to notify all of your apps as soon as a new set of values get published on Remote Config.
If you're interested in learning how to implement this feature, here's a high-level overview of the process: create a Cloud Function that triggers whenever your team publishes new Remote Config values, have that function send a Firebase Cloud Messaging data message to your clients, have each client set a local flag when it receives that message, and then fetch fresh values the next time the app comes to the foreground, bypassing the usual cache only when the flag is set.
For more information, check out the solutions guide in our documentation. This guide contains all of the code samples you'll need, both in your Cloud Function, and on the client.
You might be wondering why you can't just a) set your Remote Config cache to 0 all the time, or b) have your clients go and always retrieve the new Remote Config values as soon as they receive the notification.
The answer is that Remote Config still needs to make sure the service is usable (and free) by all apps, and we've set up both client and server-side throttles to make sure that no single app accidentally abuses the service. By sticking with the default caching time when there's no local flag indicating that there's new config available, you can avoid any client-side throttles. Your app will have a faster start-up time, too, by avoiding unnecessary network calls.
By waiting until your client goes into the foreground to fetch these new values, you avoid any server-side throttles. You'll also avoid having your clients make unnecessary network calls in the background, which means your app uses less data overall, which makes your users happier, particularly in areas of the world where data usage is expensive.
By making sure your apps are still well-behaved, you can still get the freshest possible Remote Config values while avoiding any throttles that might be applied to your app. Please consult our caching & throttling documentation for further information on this topic.
One other advantage of using this system is that if your app is in the foreground when it receives this notification, you can immediately act on the information - perhaps by fetching the new Remote Config data and prompting the user to refresh their screen. So those of you who were actively polling the Remote Config service while your app was in the foreground no longer need to resort to those extra network calls.
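Here's a rough sketch of the client-side half of that pattern on iOS. The CONFIG_STALE flag name and data-message key are made up for illustration; the fetch and activate calls are the standard Remote Config APIs:

import UIKit
import FirebaseRemoteConfig

let configStaleKey = "CONFIG_STALE"  // hypothetical local flag / message key

// 1. In your AppDelegate: when the FCM data message sent by your Cloud
//    Function arrives, just record that the cached config is stale.
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    if userInfo[configStaleKey] != nil {
        UserDefaults.standard.set(true, forKey: configStaleKey)
    }
    completionHandler(.newData)
}

// 2. The next time the app comes to the foreground, bypass the cache only
//    if the flag is set; otherwise stick with your normal expiration.
func fetchRemoteConfigIfNeeded() {
    let remoteConfig = RemoteConfig.remoteConfig()
    let isStale = UserDefaults.standard.bool(forKey: configStaleKey)
    let expiration: TimeInterval = isStale ? 0 : 43200  // 0 bypasses the cache; 12 hours otherwise

    remoteConfig.fetch(withExpirationDuration: expiration) { status, error in
        guard status == .success, error == nil else { return }
        remoteConfig.activateFetched()
        UserDefaults.standard.set(false, forKey: configStaleKey)
    }
}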
Of course, this isn't the only use for Remote Config triggers and Cloud Functions - just one of the most requested, for sure! A number of developers also want to know when new Remote Config values have been pushed to the Firebase console, and you can use Cloud Functions to send off an email to your team or push a message in a Slack channel.
You could also use Cloud Functions to keep different projects in sync. For instance, you could use a cloud function to copy a set of Remote Config values from your production project to your development or testing project.
If you're interested in giving this new approach a try, we encourage you to read over the full documentation in the solutions guide. As always, if you have other questions about Remote Config or other feature requests, we're happy to hear your feedback. Please reach out to us in the Firebase Talk group, or on Stack Overflow.
If you're building or looking to build a visual app, you'll love ML Kit's new face contour detection. With ML Kit, you can take advantage of many common Machine Learning (ML) use-cases, such as detecting faces using computer vision. Need to know where to put a hat on a head in a photo? Want to place a pair of glasses over the eyes? Or maybe just a monocle over the left eye. It's all possible with ML Kit's face detection. In this post we'll cover the new face contour feature that allows you to build better visual apps on both Android and iOS.
With just a few configuration options you can now detect detailed contours of a face. Contours are a set of over 100 points that outline the face and common features such as the eyes, nose and mouth. You can see them in the image below. Note that as the subject raises his eyebrows, the contour dots move to match it. These points are how advanced camera apps set creative filters and artistic lenses over a user's face.
Setting up the face detector to detect these points only takes a few lines of code.
lazy var vision = Vision.vision()

let options = VisionFaceDetectorOptions()
options.contourMode = .all

let faceDetector = vision.faceDetector(options: options)
The contour points can update in real time as well. To achieve an ideal frame rate, the face detector is configured with the fast mode by default.
When you're ready to detect points in a face, send an image or a buffer to ML Kit for processing.
faceDetector.process(visionImage) { faces, error in
  guard error == nil, let faces = faces, !faces.isEmpty else { return }
  for face in faces {
    if let faceContour = face.contour(ofType: .face) {
      for point in faceContour.points {
        print(point.x) // the x coordinate
        print(point.y) // the y coordinate
      }
    }
  }
}
ML Kit will then give you an array of points that are the x and y coordinates of the contours in the same scale as the image.
The face detector can also detect landmarks within faces. A landmark is just an umbrella term for facial features like your nose, eyes, ears, and mouth. We've dramatically improved its performance since launching ML Kit at I/O!
To detect landmarks, configure the face detector with the landmarkMode option:
lazy var vision = Vision.vision()

let options = VisionFaceDetectorOptions()
options.landmarkMode = .all

let faceDetector = vision.faceDetector(options: options)
Then pass an image into the detector to receive and process the coordinates of the detected landmarks.
faceDetector.process(visionImage) { faces, error in
  guard error == nil, let faces = faces, !faces.isEmpty else { return }
  for face in faces {
    // check for the presence of a left eye
    if let leftEye = face.landmark(ofType: .leftEye) {
      // TODO: put a monocle over the eye 🧐
      print(leftEye.position.x) // the x coordinate
      print(leftEye.position.y) // the y coordinate
    }
  }
}
Hopefully these new features can empower you to easily build smarter features into your visual apps. Check out our docs for iOS or Android to learn all about face detection with ML Kit. Happy building!