It's December, folks, and you know what that means: holiday cheer!
The Firebase Test Lab team is fully invested in making this season the greatest of seasons. Remember Halloween? What a hoot! Also, this happened:
(Note: these are actual Test Lab engineers, in costume, actually beating each other up with foam sticks at a Halloween party. Both get lumps of coal this year.)
We're getting ready for the holidays! So, sit back, pour yourself some eggnog, and read about what's new for your Android testing enjoyment.
Many of you are using Robo to automatically test your apps in Test Lab. Since you don't have to write any code to make it work, it's the gift that keeps on giving. Even better, you can have it fill in specific form fields and push buttons with some easy configuration.
We've found that some apps require more of a nudge to navigate into the parts that need the most testing. (Hey, even Santa needs help from a few elves!) Now, with Robo scripts, you can record a series of actions to take in your app, and play that back before Robo takes over. It works a lot like Espresso Test Recorder, except the output is a JSON file that you upload along with your APK when running a Robo test. With these extra instructions, you can guide your app past introductions or login screens.
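If you run your Robo tests from the gcloud command line, you can pass the recorded script along with your APK. Here's a rough sketch (the file names are placeholders, and the flag requires a reasonably recent gcloud SDK):

gcloud firebase test android run \
  --type robo \
  --app app-debug.apk \
  --robo-script robo_script.json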
Of course, your best bet to maximize the test coverage is writing Espresso tests that drive your app. I heard that it's easier than driving a team of reindeer!
Do you use the screenshots in Test Lab results to check if your app displays correctly? It's a great way to see if you app renders "naughty or nice" on many different types of screens. But if you test with lots of devices in a single test matrix, it can be kind of a pain to sort through all the results to compare the same screen among multiple devices. Now, Test Lab will cluster them together in your test results, so you can see all the different screen sizes, densities, and orientations from your test in a single place.
The Test Lab team is always busy at the North Pole (located at a data center in Atlanta) bringing you new devices to test with. The latest additions are the Sony Xperia XZ Premium, the Moto G4 Play, and the Huawei P8lite, delivered straight to your digital stocking. However, sometimes old toys break and need to be recycled. At the Test Lab workshop, we call that "device deprecation", which means we take old devices out of commission as they become unreliable. To see a (twice-checked) list of devices that are currently available, in addition to those being deprecated, click through to this page. Once a device is marked as "deprecated", it'll remain available for a month before being removed.
Deprecated devices look like this in the Firebase console:
And like this in the gcloud command line (note the "deprecated" tag in red):
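You can pull up the same device catalog, including the deprecated tag, from your own terminal with the standard listing command:

gcloud firebase test android models list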
You better not pout, you better not cry: these devices served longer than their expected lifetime!
If you have questions or feedback, join us on the Firebase Slack in the #test-lab channel. We're all friendly there, so be good, for goodness sake!
If you've seen any of my recent Firebase talks, you know I'm a huge fan of TypeScript. At this year's Firebase Dev Summit in Amsterdam, I recommended TypeScript to improve the quality of your Cloud Functions. Today, we're making it easier to use TypeScript when writing Cloud Functions.
TypeScript is an extension of JavaScript that helps you build apps more quickly and correctly. It helps us build apps quickly by giving us early access to the newest JavaScript features, like async and await. TypeScript also adds optional static typing to your code. Static typing lets IDEs give better code completion, linters catch more complex bugs, and compilers catch all syntax errors. Many developers have expressed interest in using TypeScript with Cloud Functions. Now, starting with version 3.16.0, the Firebase CLI has first-class support for TypeScript. Get the latest version of the CLI with the command:
npm install -g firebase-tools
The new version of the Firebase CLI will ask you to pick a language when you create a new Firebase project with firebase init and choose to use Cloud Functions. If you choose TypeScript, it will set up a TypeScript-ready project structure and compiler options for Cloud Functions.
Because Cloud Functions runs JavaScript, you need to "transpile" your TypeScript into JavaScript before it can run on Node.js. The Firebase CLI understands this, and all TypeScript projects you initialize with the new CLI are automatically compiled as part of every code deployment.
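Under the hood, that compile step is just an npm script in your functions folder. A rough sketch of the relevant parts of the generated functions/package.json (exact contents vary by CLI version) looks like this:

{
  "main": "lib/index.js",
  "scripts": {
    "build": "tsc"
  }
}

The CLI wires this up to a predeploy hook (shown below), so firebase deploy runs the build for you.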
When you initialize your TypeScript project, the Firebase CLI recommends you use TSLint. We combed through every rule in TSLint to pick a set of safe defaults. We try not to enforce our coding style, but we will prevent deploys if we're fairly certain your code has a bug. This includes the most common error when writing Cloud Functions: forgetting to return a promise!
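To give a flavor of the kind of bug those defaults are designed to catch, here's a minimal sketch using the pre-1.0 functions event API used elsewhere in this post, with a hypothetical notifySlack helper:

import * as functions from 'firebase-functions';

// notifySlack() is a hypothetical helper that returns a Promise.
declare function notifySlack(text: string): Promise<void>;

// Bug: the promise from notifySlack() is not returned, so the function may be
// shut down before the work finishes. The default lint rules flag this.
export const onMessageBuggy = functions.database.ref('/messages/{id}')
  .onWrite(event => {
    notifySlack(event.data.val());
  });

// Fixed: return the promise so Cloud Functions waits for the work to complete.
export const onMessageFixed = functions.database.ref('/messages/{id}')
  .onWrite(event => {
    return notifySlack(event.data.val());
  });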
TSLint can detect warnings and errors. Warnings are shown during deploys and errors will block deploys to protect you from breaking production. If you're absolutely sure that your code is bug-free, you can disable the linter on specific lines with rule flags:
/* tslint:disable:<rule> */ myCode(); /* tslint:enable:<rule> */
Or you can disable the rule globally by removing the rule from tslint.json.
The Firebase CLI is able to automatically transpile and lint your TypeScript code thanks to another new Firebase CLI feature: lifecycle hooks. These hooks let you add commands that run automatically at certain points in a deploy. The first two hooks, "predeploy" and "postdeploy", run before and after a feature is deployed. They work with all Firebase features (Cloud Functions, Hosting, Storage, Database, etc.), so you can use them to run build steps, lint checks, or release chores like tagging a commit as part of every deploy.
To add a lifecycle hook, add either "predeploy" or "postdeploy" as a subkey in that feature's stanza of firebase.json. For example, this is the predeploy hook that compiles TypeScript before deploying:
{ "functions": { "predeploy": "npm --prefix functions run build" } }
The following postdeploy hook tags the current git commit as production (warning: this assumes you don't have a branch named 'production-functions').
{ "functions": { "postdeploy":""git tag -f production-functions && git push -f origin production-functions" } }
We've extended our Cloud Functions docs with information about TypeScript.
Let us know how TypeScript affects your development process by tweeting @Firebase. Has it helped catch bugs early? What linter rules did we miss? What are your favorite lifecycle hooks?
A long while back, David East wrote a handy blog post about using the Firebase CLI to read and write your Firebase Realtime Database. The CLI has evolved a lot since then, so I'd like to share some of what's changed (and new!).
When I first started working with Realtime Database, I'd spend a fair amount of time in the Firebase console manually entering some data to work with. It's kinda fun to make changes there, then see them immediately in my app! But I soon discovered that it's kind of repetitive and time-consuming to test like that. I could write a program to make the changes for me instead, but that wasn't a very flexible option. For easy reading and writing of data in my database, I found that the Firebase CLI is the best option for me. So, I'll share some of what it does here and how it can come in handy. All my examples will be using the Bash shell, so you may have to modify them for other shells.
The Firebase CLI requires you to set aside a project directory, log in, and select a project that you want to work with, so be sure to follow the instructions to get set up with your existing project.
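If you haven't set that up yet, the flow looks roughly like this (the directory name is a placeholder):

firebase login     # authenticate the CLI with your Google account
cd my-project-dir  # the directory you've set aside for this project
firebase init      # follow the prompts and select your existing project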
To write data from the command line, use the firebase database:set command:
firebase database:set /import data.json
The first argument to database:set is the path within the database to be written (here, /import), and the second is the JSON file to read from. If you don't have a file, and would rather provide the JSON on the command line, you can do this also with the --data flag:
firebase database:set /import --data '{"foo": "bar baz"}'
Notice that the JSON is quoted for the command line with single quotes. Otherwise, the space between the colon and "bar" would fool your shell into thinking that there are two arguments there. You can't use double quotes to quote this JSON string either, because JSON uses those quotes for its own strings. Escaping JSON for a unix command line can be tricky, so be careful about that! (For further thought: what if there was a single quote in one of the JSON strings?)
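One common Bash answer: close the single-quoted string, insert an escaped quote, and reopen it. For example:

firebase database:set /import --data '{"foo": "it'\''s quoted"}'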
Also, you can pipe or redirect JSON to stdin. So, if you have a program that generates some JSON to add to your database, you can do it like this:
echo '{"foo": "bar baz"}' | firebase database:set /import --confirm
Notice that the --confirm flag is passed here to prevent the command from asking if you're OK potentially overwriting data. Piping to stdin won't work without it!
The database:set command is great for initially populating your database with a setup script. If you run automated integration tests, the CLI is a handy way of scripting the initialization of your test environment.
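For example, a (hypothetical) seed script for a test run might look like this, assuming you keep fixture data in JSON files next to your tests:

#!/bin/bash
# seed-test-data.sh: populate the test database before an integration test run.
# The paths and fixture file names here are hypothetical.
firebase database:set /users test/fixtures/users.json --confirm
firebase database:set /messages test/fixtures/messages.json --confirm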
It's also super handy for quickly triggering Cloud Functions database triggers, so you don't have to type in stuff at the command prompt every time you have something complicated to test.
Reading data from your database with the Firebase CLI is similarly easy. Here's how you fetch all the data under /messages as a JSON blob:
firebase database:get /messages
To save the output to a file, you can use a shell redirect, or the --output flag:
firebase database:get /messages > messages.json
firebase database:get /messages --output messages.json
You'll notice the JSON output is optimized for space, which makes it hard to read. For something a little easier on the eyes, you can have the output "pretty-printed" for readability:
firebase database:get /messages --pretty
You can also sort and limit data just like the Firebase client APIs.
firebase database:get /messages --order-by date --limit-to-last 10
To see all the options for reading and sorting, be sure to see the CLI help (all Firebase CLI commands share their usage like this):
firebase database:get --help
You've probably used the Realtime Database push function to add data to a node in your database. You can do the same with the CLI:
firebase database:push /messages --data '{"name":"Doug","text":"I heart Firebase"}'
This will create a unique push id under /messages and add the data under it. (Did you know that push IDs recently switched from starting with "-K" to "-L"?)
If you want to update some values at a location without overwriting that entire location, use database:update:
firebase database:update /users/-L-7Zl_CiHW62YWLO5I7 --data '{"name":"CodingDoug"}'
For those times when you need to remove something completely, there is database:remove. This command will blow away your entire database, unconditionally, kinda like rm -rf /. Be careful with this one:
firebase database:remove / --confirm
Sometimes you might want to simply copy the contents of your database from one project to another (for example, your development environment to staging). This is really easy by piping the stdout of database:get to the stdin of database:set:
firebase --project myproject-dev database:get / | \
  firebase --project myproject-staging database:set / --confirm
Note here the use of --project to specify which Firebase project is to be used for reading and writing. This is your project's unique id in the Firebase console.
If you find yourself repeating a set of commands, it's probably time to make a bash function. Save your function in your .bash_profile and you'll be able to call it from anywhere on your shell command line.
Do you often copy data between databases? The function below makes it easy:
function transfer_to() {
  local src_db="${1}"
  local dest_db="${2}"
  local path="${3:-/}"
  firebase database:get --project "$src_db" "$path" | firebase --project "$dest_db" database:set "$path" --confirm
}
To use the function, just call the transfer_to command (as if it were any other command) with the names of the projects to copy from and to:
transfer_to myproject-dev myproject-staging
The command line is one of the most versatile tools in an engineer's toolbox. What are you doing with the CLI? We'd love to hear from you, so please shout out to us on Twitter. If you have any technical questions, post those on Stack Overflow with the firebase tag. And for bug reports and feature requests, use this form.
Scaling your Realtime Database just got a lot easier. We're excited to announce multi-database support in your Firebase Projects!
The Realtime Database is capable of handling a lot of traffic, however it does have its limitations. Scaling past these limits requires "sharding" your data across multiple databases to handle the load. Traditionally, you would need to create another project to get another database. If you need to scale again, you would need to repeat this process.
While this is possible, it's not exactly easy. It's not fun to manage data across multiple projects, and authentication is handled separately per project. This generally requires you to use Custom Authentication in Firebase Auth, which can be a lot more work. We're happy to say that multi-database solves these problems.
Multi-database allows you to create multiple database instances in a single project. This eliminates the need to manage data across multiple projects. To get started, you need to be on the Blaze plan. In the data viewer, you can click the triple-dot icon to create new database instances:
To access data from a secondary instance, pass its absolute URL when you get a database instance from the SDK.
const app = firebase.initializeApp({
  // use your main config
  databaseURL: "https://multi-db.firebaseio.com/"
});

// This is the default DB.
const db1 = app.database();

// Reference the second DB instance.
// Keep in mind that you need to upgrade to the latest release before this will work!
const db2 = app.database("https://multi-db501c7.firebaseio.com/");
Since these databases are in the same project, they share the same Authentication session, which means no custom server is required.
Important note on SDK versions: Keep in mind that if you have an existing app as of the time of this blog posting, you'll need to upgrade to the latest SDK versions before this will work. (For node admin, you need at least 5.5.0, and inside Cloud Functions you will need at least 0.7.3).
How you shard your database depends on your application. However, there are many useful techniques.
The Master Shard
You can create a "master shard" that contains the mappings to where the data is located in other database shards. This allows you to only request heavier sets of data when needed.
{ "chatrooms": { "general": "room-db-general", "randomchat": "room-db-randomchat", "sweetgifs": "room-db-sweetgifs" } }
Bucketing
You can bucket data per database. This means you can have a users database, a messages database, and a receipts database.
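In code, that could look something like this (the instance URLs are hypothetical, again reusing the app object from the earlier snippet):

// One database instance per kind of data.
const usersDb = app.database("https://myapp-users.firebaseio.com/");
const messagesDb = app.database("https://myapp-messages.firebaseio.com/");
const receiptsDb = app.database("https://myapp-receipts.firebaseio.com/");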
Per customer
If you're developing a multi-tenant service, another option is to create a database per customer. This approach keeps each customer's database isolated, so heavy load or an outage for one customer doesn't affect all of your customers.
Each database instance has its own set of security rules. Sharded databases can have different structures, which means you can apply different rules based on each database's purpose. You can manage and test each database's rule set in the console. It's important to note that databases are completely independent: you cannot access another database's data during rules evaluation.
We're also excited to announce that Cloud Functions for Firebase supports multi-database as well! You can specify which database you wish to trigger events from:
const functions = require('firebase-functions');
const firebase = require('firebase-admin');

firebase.initializeApp(functions.config().firebase);

exports.copymsg = functions.database.instance('room-db-randomchat').ref('/messages')
  .onWrite(event => {
    const { data } = event;
    const notificationDb = firebase.app().database(<your-full-db-url>);
    return notificationDb.ref(`notifications/${data.key}`).set(data.val());
  });
Note that the absolute URL is not required for functions. If no database instance is provided, Cloud Functions uses the default database.
const functions = require('firebase-functions');
const firebase = require('firebase-admin');

firebase.initializeApp(functions.config().firebase);

// triggers off your default database
exports.sanitize = functions.database.ref('/messages').onWrite(event => {
  // sanitize message here
});
Multi-database makes it easier to scale with the Realtime Database. We hope you like it, and if you want to learn more, check out our documentation. We also have an official support channel, a Slack channel, and we monitor our Stack Overflow tags. Don't hesitate to reach out to us!
Firebase Cloud Messaging (FCM) is a cross-platform messaging solution that reliably delivers messages at no cost. FCM sends over 400 billion messages per day. Today we are excited to announce the launch of a new RESTful API, the FCM HTTP v1 API, that makes it safer and easier to send messages to your cross-platform applications. All existing FCM clients can receive messages sent via the new FCM API -- it does not require any changes on the client side.
Security
The new FCM API uses the OAuth2 security model. In the event that an access token becomes public, it can only be used for an hour or so before it expires. Refresh tokens are not transmitted as often and are thus much less likely to be captured.
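For a flavor of what a send looks like, here's a rough sketch of a v1 request, assuming you've already minted a short-lived OAuth2 access token from your service account credentials (the project ID and topic are placeholders):

# ACCESS_TOKEN holds a short-lived OAuth2 token scoped for FCM.
curl -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": {"topic": "news", "notification": {"title": "Hello", "body": "Sent via the FCM HTTP v1 API"}}}' \
  https://fcm.googleapis.com/v1/projects/your-project-id/messages:send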
Sending messages to multiple platforms is possible with the legacy API. However, as you add functionality to messages, sending to multiple platforms becomes difficult. With the new FCM API, sending messages to multiple platforms is very easy.
You can still send simple messages to multiple platforms using the common top level fields. For example, you can send this message informing users about a sale:
{ "message": { "topic":"sale-watchers", "notification": { "title":"Check out this sale!", "body":"All items half off through Friday" } }
When you send a notification like this one to devices subscribed to a topic, you probably want them to be taken to the description of the item. On Android you would compose a message including a "click_action" field indicating the activity to open. On iOS, APNs relies on a "category" indicating the action to take upon clicking, including which view to show.
"click_action"
"category"
Before, since these keys were unique to their respective platforms, developers would have to create two separate messages. Now, we can use platform-specific fields together with common ones in a single message:
{ "message": { "topic":"sale-watchers", "notification": { "title":"Check out this sale!", "body":"All items half off through Friday" }, "android":{ "notification"{ "click_action":"OPEN_ACTIVITY_3" } }, "apns": { "payload": { "aps": { "category": "SALE_CATEGORY" } } } } }
Note: In this case web apps subscribed to the 'sale-watchers' topic will receive a notification message with the defined title and body.
The new FCM API fully supports the messaging options available on iOS, Android, and the Web. Since each platform has its own defined block in the JSON payload, we can easily extend to other platforms as needed. If a future IoT messaging protocol required a security_key field, we could easily support an iot block within the FCM payload.
{ "message": { "topic":"sale-watchers", "notification": { "title":"Check out this sale!", "body":"All items half off through Friday" }, "android":{ "notification"{ "click_action":"OPEN_ACTIVITY_3" } }, "apns": { "payload": { "aps": { "category": "SALE_CATEGORY" } } } "iot": { "security_key": "SECURITY_KEY" } } }
The new FCM API is the more secure, cross-platform, future-proof way of sending messages to FCM clients. If you are currently using the FCM legacy API, or if you are interested in using FCM to send messages to your apps, give the new FCM API a try. See the FCM guides and reference docs for more:
About FCM
Authorize requests
Build message requests
Migrate from GCM to FCM on Android
Migrate from GCM to FCM on iOS