Today we're excited to announce Firebase Hosting integration for Google Cloud’s new Cloud Run service. Cloud Run is a fully managed compute platform that enables developers to run stateless containers that are invocable via HTTP requests in a language and framework of their choosing. Firebase Hosting integration lets you use this architecture as a backend for a web app or microservice in your Firebase project.
Firebase Hosting is already a convenient and secure way to host sites and microservices. It can serve static pages you upload directly and, with the proper configuration in the firebase.json file, direct incoming requests to Cloud Functions for Firebase to serve dynamic content. This workflow is a one-stop shop if you don't mind working in the Node.js environment. You can already build a fast site with dynamic content that automatically scales horizontally to meet user demand.
Not every developer wants to work with Node.js, though. Many already have large teams with existing knowledge in other languages and frameworks. Languages such as Go, Ruby, and Java have a huge presence in the server market but are currently absent from Firebase's existing cloud backend solutions.
Leveraging the power of Google's own experience building infrastructure for Kubernetes and the efforts of the Knative open source project, Google Cloud Platform now lets you deploy stateless servers. The only requirements are that you can produce a Docker image that responds to HTTP requests on the port specified by the $PORT environment variable, and that requests served through Firebase Hosting complete within 60 seconds.
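To make that contract concrete, here is a minimal sketch of a server that listens on $PORT, written in Kotlin using the JDK's built-in HTTP server. Any language or framework works just as well; the greeting and port fallback are only illustrative, and packaging it into a Docker image is left out.

import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

fun main() {
    // Cloud Run tells the container which port to listen on via the PORT environment variable.
    val port = (System.getenv("PORT") ?: "8080").toInt()

    val server = HttpServer.create(InetSocketAddress(port), 0)
    server.createContext("/") { exchange ->
        // Keep responses quick; requests routed through Firebase Hosting must finish within 60 seconds.
        val body = "Hello from Cloud Run!".toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
    println("Listening on port $port")
}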
So how does this tie into Firebase Hosting? If you're new to Hosting, you may only be aware of static hosting or the free SSL certificates. To serve dynamic content, rewrites let you route requests to your Cloud Functions, and we've now extended rewrites to support Cloud Run as well. With a few minor changes to your firebase.json file, you can now point a specific path to your container:
{ "hosting": { "public": "public", "rewrites": [ { "source": "/cloudrun", "run": { "serviceId": "my-awesome-api", // Optional (default is us-central1) "region": "us-central1", } } ] } }
or use wildcards to expose an entire API:
{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "/api/**",
        "run": {
          "serviceId": "my-awesome-api",
          "region": "us-central1" // Optional (default is us-central1)
        }
      }
    ]
  }
}
If you have a dynamic site that doesn't update very frequently, take advantage of Firebase Hosting's global CDN (content delivery network) to improve your site's response time. For example, if you're using Express on Node.js, configure the caching behavior using the Cache-Control header like so:
res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
which caches the results of a request in the browser (max-age) for 5 minutes and in the CDN (s-maxage) for 10 minutes. With properly tuned cache settings, you can have a fast, flexible, dynamically rendered site that doesn’t need to run your server logic every time the user opens the page.
Unlike Cloud Functions for Firebase, when you use Cloud Run you can build an image with any combination of languages and frameworks to handle these requests. Ruby developers can pull in Sinatra, Java teams can fire up the Spring Framework, and you can even check out server-side Dart using Shelf to serve content. You don't have to wait for any official language support -- if you can create a Docker container, you can build and deploy backend code. Even if you're working in high-performance computing and your engineering team is trained up in Fortran, you can leverage that existing knowledge to create a web dashboard with Fortran.io.
Similar to Cloud Functions, Cloud Run automatically scales your containers horizontally to meet the demands of your users. There's no need to manage clusters or node pools; you simply use the resources needed at the time to accomplish the task at hand. One tradeoff is that, also like Cloud Functions, Cloud Run containers are stateless. However, unlike Cloud Functions, each container instance can handle up to 80 concurrent requests, which can help reduce the frequency of cold starts.
Using Firebase Hosting with Cloud Run, we hope to empower you to build better web apps faster than ever before. Now your frontend and backend developers can truly use a single language, even share a code base. To get started right away, follow our step-by-step guide. Note that Cloud Run exists in the Google Cloud console rather than the Firebase console, but if you have a Firebase project then you already have a Google Cloud Platform project as well.
Hey there, Firebase developers!
Well, Cloud Next 2019 is upon us, and if you happen to be one of the several thousand people descending upon Moscone Center this year and want to get your fill of Firebase knowledge, you're in luck! There are a bunch of great sessions the Firebase team is putting on throughout the conference. And if you want to talk to any of us in person, swing on by the App Dev zone in the expo area. We'll be at the Firebase booth from now until Thursday the 11th.
But if you're not able to make it to beautiful downtown San Francisco this year, never fear! You can still find out everything that's new with Firebase in this blog post, so read on!
For those of you who are Google Cloud Platform customers, we are pleased to announce that the GCP support plan now includes support for Firebase products. This means that if you are using any of the paid GCP support packages, you can get the same high-quality support that you've come to expect from GCP for Firebase products as well. This includes target response times as quick as 15 minutes, technical account management (for enterprise customers), phone support, and much more.
Now if you're not a paying GCP customer, don't worry -- free community support isn't going anywhere. But for many of our larger customers who were interested in a more robust paid support experience, this new option is welcome news. To find out more, you can check out the support pages on the GCP site as well as the Firebase Support Guide.
One of the new GCP products that we announced at this year's Cloud Next is Cloud Run, a fully managed compute platform that lets you run stateless containers which you can invoke via HTTP requests. And we're happy to announce that you can use Cloud Run in conjunction with Firebase Hosting.
Why do you care? Because Firebase Hosting isn't just good for hosting static sites. You can run microservices on top of Hosting as well. In the past, you did this by connecting your Hosting site with Cloud Functions for Firebase, which meant that you had to write all of your code in Node.js. But now that you can deploy stateless servers through Cloud Run and have Hosting talk to them, you can build your microservices in anything from Python to Ruby to Swift.
This is a pretty deep topic which deserves its own blog post, so keep an eye out for that in the next couple of days. Or check out the documentation if you want to get started today.
In the past, you could filter your event reports in Google Analytics for Firebase by a single user property (or audience). So you could quickly answer questions like how many iOS 12 users were signing up for your newsletter. But up until now, you couldn't filter by more than one user property at a time. So if you wanted to find out how many iOS 12 users on iPad Pros were signing up for your newsletter, that wasn't really possible.
Well, we're happy to announce that you'll be able to filter your Analytics event reports by any number of different user properties or audiences -- both ones defined by Firebase as well as custom user properties -- at the same time. So if you want to find out how many iOS 12 users with iPad Pros who prefer dogs over cats signed up for your newsletter, that's now something you can see directly within the Firebase console.
This change is currently rolled out to a small number of users, and will be available to everybody over the next few weeks. This will apply automatically to all of your data going back to December of 2018 when it becomes available, so hop on over to the Firebase console and give it a try!
About 9 months ago, we gave developers the ability to create nicer looking domains for their Dynamic Links. So instead of having Dynamic Links with domains that looked like a8bc7w.app.goo.gl, you could set them to something much nicer, like example.page.link.
We've now improved upon this feature to give you the ability to create Dynamic Links with any custom domain you own. So if you want to create a link with a domain like www.example.com, this is now something you can do with Dynamic Links.
The one caveat here is that your site needs to be hosted using Firebase Hosting. If migrating your primary domain over to Firebase Hosting isn't feasible, you can easily set up a subdomain of your site instead. For instance, maybe you can't move all of www.example.com to Firebase Hosting, but you could pretty easily set up links.example.com on Firebase Hosting, and use that for your Dynamic Links moving forward.
To find out more about custom domains in Dynamic Links and to get started, make sure to check out the documentation.
Of course, we're always rolling out new features and improvements to the Firebase platform, and with I/O happening just next month, maybe we'll have something more to talk about in May 😉. There's only one way to find out: Attend I/O in person, or keep reading the Firebase blog! (Okay, that's two ways. Counting was never a strong suit of mine.)
Today we are announcing the release of two new features to ML Kit: Language Identification and Smart Reply.
You might notice that both of these features are different from our existing APIs, which were all focused on image and video processing. Our goal with ML Kit is to offer powerful but simple-to-use APIs that let you leverage ML, independent of the domain. As such, we are excited to expand ML Kit with solutions for Natural Language Processing (NLP)!
NLP is a category of ML that deals with analyzing and generating text, speech, and other kinds of natural language data. We're excited to start out with two APIs: one that helps you identify the language of text, and one that generates reply suggestions in chat applications. Both of these features work fully on-device and are available on the latest version of the ML Kit SDK, on iOS (9.0 and higher) and Android (4.1 and higher).
Generate reply suggestions based on previous messages
A new feature popping up in messaging apps is to provide the user with a selection of suggested responses, either as actions on a notification or inside the app itself. This can really help users respond quickly when they are busy, or serve as a handy way to initiate a longer message.
With the new Smart Reply API you can now quickly achieve the same in your own apps. The API provides suggestions based on the last 10 messages in a conversation, although it still works if only one previous message is available. It is a stateless API that fully runs on-device, so we don't keep message history in memory nor send it to a server.
textPlus app providing response suggestions using Smart Reply
We have worked closely with partners like textPlus to ensure Smart Reply is ready for prime time and they have now implemented in-app response suggestions with the latest version of their app (screenshot above).
Adding Smart Reply to your own app is done with a simple function call (using Kotlin in this example):
val smartReply = FirebaseNaturalLanguage.getInstance().smartReply
smartReply.suggestReplies(conversation)
    .addOnSuccessListener { result ->
        if (result.status == SmartReplySuggestionResult.STATUS_NOT_SUPPORTED_LANGUAGE) {
            // The conversation's language isn't supported, so the
            // result doesn't contain any suggestions.
        } else if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
            // Task completed successfully
            // ...
        }
    }
    .addOnFailureListener {
        // Task failed with an exception
        // ...
    }
After you initialize a Smart Reply instance, call suggestReplies with a list of recent messages. The callback provides the result which contains a list of suggestions.
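If you're wondering what goes into conversation, it's simply a list of FirebaseTextMessage objects built from your own chat history. Here's a small sketch, with made-up message texts, timestamps, and remote user ID, that assembles a short conversation and logs the returned suggestions:

import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.smartreply.FirebaseTextMessage
import com.google.firebase.ml.naturallanguage.smartreply.SmartReplySuggestionResult

fun showSuggestedReplies() {
    // Build the conversation from oldest to newest; "user123" is a hypothetical remote user ID.
    val conversation = listOf(
        FirebaseTextMessage.createForRemoteUser(
            "Are we still on for lunch today?", System.currentTimeMillis() - 60_000, "user123"),
        FirebaseTextMessage.createForLocalUser(
            "Sure, where do you want to go?", System.currentTimeMillis())
    )

    FirebaseNaturalLanguage.getInstance().smartReply
        .suggestReplies(conversation)
        .addOnSuccessListener { result ->
            if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
                // Each suggestion is a short reply you could surface as a chip
                // or as an action on a notification.
                for (suggestion in result.suggestions) {
                    Log.d("SmartReply", "Suggested reply: ${suggestion.text}")
                }
            }
        }
}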
For details on how to use the Smart Reply API, check out the documentation.
Tell me more ...
Although as a developer you can just pick up this new API and easily integrate it into your app, it may be interesting to reveal a bit about how it works under the hood. At the core of Smart Reply is a machine-learned model, executed with TensorFlow Lite, that uses a state-of-the-art architecture based on SentencePiece text encoding[1] and Transformer[2].
However, as we realized when we started development of the API, the core suggestion model is not all that’s needed to provide a solution that developers can use in their apps. For example, we added a model to detect sensitive topics, so that we avoid making suggestions in response to profanity or in cases of personal tragedy/hardship. Also, we included language identification, to ensure we do not provide suggestions for languages the core model is not trained on. The Smart Reply feature is launching with English support first.
Identify the language of a piece of text
The language of a given text string is a subtle but helpful piece of information. A lot of apps have functionality that depends on the language: think of features like spell checking, text translation, or Smart Reply. Rather than asking users to specify the language they use, you can use our new Language Identification API.
ML Kit can identify the language of a piece of text across 110 different languages and typically needs only a few words to make an accurate determination. It is fast as well, typically providing a response within 1 to 2 ms on both iOS and Android phones.
Similar to the Smart Reply API, you can identify the language with a function call (using Kotlin in this example):
val languageIdentification = FirebaseNaturalLanguage.getInstance().languageIdentification
languageIdentification
    .identifyLanguage("¿Cómo estás?")
    .addOnSuccessListener { identifiedLanguage ->
        Log.i(TAG, "Identified language: $identifiedLanguage")
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "Language identification error", e)
    }
The identifyLanguage function takes a piece of text, and its callback provides a BCP-47 language code. If no language can be confidently recognized, ML Kit returns a code of und for undetermined. The Language Identification API can also provide a list of possible languages and their confidence values.
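To get that full list of candidates rather than a single answer, the SDK also exposes an identifyPossibleLanguages call; here's a quick sketch that reuses the languageIdentification instance from above and logs each candidate with its confidence:

languageIdentification
    .identifyPossibleLanguages("¿Cómo estás?")
    .addOnSuccessListener { identifiedLanguages ->
        // Each entry pairs a BCP-47 language code with a confidence value between 0 and 1.
        for (language in identifiedLanguages) {
            Log.i(TAG, "${language.languageCode} (confidence: ${language.confidence})")
        }
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "Language identification error", e)
    }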
For details on how to use the Language Identification API, check out the documentation.
Get started today
We're really excited to expand ML Kit to include Natural Language APIs. Give the two new NLP APIs a spin today and let us know what you think! You can always reach us in our Firebase Talk Google Group.
As ML Kit grows, we look forward to adding more APIs and categories that enable you to provide smarter experiences for your users. With that, please keep an eye out for some exciting ML Kit announcements at Google I/O.
When we think of scaling, we usually imagine spiky charts of users hitting a database or processing computationally expensive queries. What we don't always think about is deleting data. Handling large deletes is an important part of scaling a database. Imagine a system that's required to delete historical records at a specific deadline. If these records are hundreds of gigabytes in size, it will likely be difficult to delete them all without bogging the database down for the rest of its users. This exact scenario hasn't always been easy with the Firebase Realtime Database, but we're excited to say that it just got a lot easier.
Today, we're introducing a new way to efficiently perform large deletes!
If you want to delete a large node, the new recommended approach is to use the Firebase CLI (> v6.4.0). The CLI automatically detects a large node and performs a chunked delete efficiently.
$ firebase database:remove /path/to/delete
Keep in mind that in order to delete a large node, the Firebase CLI has to break it down into chunks. This means that clients can see a partial state where part of the data is missing. Writes in the path that is being deleted will still succeed, but the CLI tool will eventually delete all data at this path. This behavior is acceptable if no app depends on this node. However, if there are active listeners within the delete path, please make sure the listener can gracefully handle partial documents.
If you want consistency and fast deletion, consider using a special field, a.k.a. a tombstone, to mark a record as hidden, and then run a Cloud Functions cron job to asynchronously purge the data. You can use Firebase Security Rules to disallow access to hidden records.
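As a rough sketch of that pattern, the purge job just needs to query for tombstoned records and remove them in small batches. The snippet below uses the Firebase Admin SDK from Kotlin; the same logic would live in your scheduled Cloud Function, and the records path and deleted flag are hypothetical names for your own data model.

import com.google.firebase.database.DataSnapshot
import com.google.firebase.database.DatabaseError
import com.google.firebase.database.FirebaseDatabase
import com.google.firebase.database.ValueEventListener

// Assumes FirebaseApp.initializeApp(...) has already been called with credentials for your project.
fun purgeTombstonedRecords() {
    val records = FirebaseDatabase.getInstance().getReference("records")

    // Find records previously marked with the hypothetical "deleted" tombstone flag.
    records.orderByChild("deleted").equalTo(true)
        .limitToFirst(100) // Purge in small batches so each removal stays a small write.
        .addListenerForSingleValueEvent(object : ValueEventListener {
            override fun onDataChange(snapshot: DataSnapshot) {
                for (child in snapshot.children) {
                    child.ref.removeValueAsync()
                }
            }

            override fun onCancelled(error: DatabaseError) {
                System.err.println("Purge query cancelled: ${error.message}")
            }
        })
}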
We've also added a configuration option, defaultWriteSizeLimit, to the Realtime Database that lets you specify a write size limit. With it, you can prevent operations (large deletes and writes) from being executed on your database if they exceed the threshold.
You can use this option to prevent app code from accidentally triggering a large operation, which would make your app unresponsive for a time. For more detail, please see our documentation about this option.
You can check and update the configuration via the CLI tool (version 6.4.0 and newer). There are four available thresholds, so you can pick the one that best fits your application's requirements: small, medium, large, and unlimited.
Note: the target time is not a guaranteed cutoff; the actual write time may differ from the estimate.
$ firebase database:settings:set defaultWriteSizeLimit unlimited --instance <database-name>
$ firebase database:settings:get defaultWriteSizeLimit --instance <database-name>
For REST requests, you can override defaultWriteSizeLimit with the writeSizeLimit query parameter. In addition, REST requests support an extra writeSizeLimit value, tiny, the limit used by firebase database:remove. For example:
$ curl -X PUT \
  "https://<database-name>.firebaseio.com/path.json?writeSizeLimit=medium"
The default defaultWriteSizeLimit for new databases is large. In order to avoid affecting existing apps, the setting will remain at unlimited for existing projects for now.
We do want to extend this protection to everyone, so this summer (June through August 2019) we will set defaultWriteSizeLimit to large for existing databases that have not configured defaultWriteSizeLimit. To avoid disruption, we will exclude any databases that have triggered at least one large delete in the past three months.
These controls can help you keep your apps responsive and your users happy. We suggest setting defaultWriteSizeLimit for your existing apps today.
Let us know what you think of this new feature! Leave a message in our Google group.
In Firebase Crashlytics, you can view crashes and non-fatals by version. Several of our customers take advantage of this filtering, especially to focus on their latest releases.
But sometimes too many versions can be a bad thing. Firebase Crashlytics, by default, shows you the last 100 versions we have seen. If you have a lot of debug versions created by developers and by your continuous integration and deployment pipeline, you might soon stop seeing the versions that really matter, e.g., production builds.
So how do you make sure that this does not happen? Disabling Crash reporting initialization for debug builds is the simplest way of achieving this. Let's explore how to do this on iOS and Android.
For iOS apps, first check if you are manually initializing Crashlytics (this happens for Fabric apps that were linked to a Firebase app).
If you use Swift, search for the line Fabric.with([Crashlytics.self]) in AppDelegate.swift. If this line is present, then you are manually initializing Crashlytics, otherwise you are using automatic initialization.
If you use Objective-C, search for the line [Fabric with:@[[Crashlytics class]]]; in AppDelegate.m. If this line is present, then you are manually initializing Crashlytics; otherwise, you are using automatic initialization.
For apps that are using manual initialization, you can simply skip initializing Crashlytics for DEBUG builds.
For Swift
#if !DEBUG
Fabric.with([Crashlytics.self])
#endif
For Objective-C
#if !DEBUG
[Fabric with:@[[Crashlytics class]]];
#endif
If you aren't initializing Crashlytics manually, Firebase initializes it for you automatically. You can turn off that automatic collection by adding a new key to your Info.plist file: set firebase_crashlytics_collection_enabled to no.
Then you can initialize Crashlytics yourself, as shown in the examples above for Swift and Objective-C.
For Android apps, first check if you are manually initializing Crashlytics (this happens for Fabric apps that were linked to a Firebase app). Search for Fabric.with in your project. If this line is present, then you are manually initializing Crashlytics. Otherwise, Crashlytics is being automatically initialized through Firebase.
To disable Crashlytics in debug builds, you can make use of the BuildConfig.DEBUG flag. Edit the Fabric.with statement you found previously, adding a check for the DEBUG flag:
if (!BuildConfig.DEBUG) {
    Fabric.with(this, new Crashlytics());
}
If Crashlytics is being initialized automatically instead, turn off initialization in your debug builds by creating a debug folder in your src directory and adding an AndroidManifest.xml with this snippet:
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
    <application>
        <meta-data
            android:name="firebase_crashlytics_collection_enabled"
            android:value="no" />
    </application>
</manifest>
This snippet will be merged into the manifest of all of your debug variants, and will disable Crashlytics in those builds. If you'd prefer finer-grained control, you can use this approach for any variants you'd like to exclude.