If you've seen any of my recent Firebase talks, you know I'm a huge fan of TypeScript. At this year's Firebase Dev Summit in Amsterdam, I recommended TypeScript to improve the quality of your Cloud Functions. Today, we're making it easier to use TypeScript when writing Cloud Functions.
TypeScript is an extension of JavaScript that helps you build apps more quickly and correctly. It speeds up development by giving you early access to the newest JavaScript features, like async and await, and it adds optional static typing to your code. Static typing lets IDEs offer better code completion, lets linters catch more complex bugs, and lets the compiler catch syntax errors before you deploy. Many developers have expressed interest in using TypeScript with Cloud Functions, and starting with version 3.16.0, the Firebase CLI gives TypeScript first-class support. Get the latest version of the CLI with this command:
npm install -g firebase-tools
The new version of the Firebase CLI will ask you to pick a language when you create a new Firebase project with firebase init and choose to use Cloud Functions. If you choose TypeScript, it will set up a TypeScript-ready project structure and compiler options for Cloud Functions.
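For example, a function written in TypeScript in the generated functions/src/index.ts might look like this (a minimal sketch, not necessarily the exact scaffold the CLI produces):

import * as functions from 'firebase-functions';

// A simple HTTPS-triggered function written in TypeScript.
export const helloWorld = functions.https.onRequest((request, response) => {
  response.send('Hello from TypeScript!');
});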
Because Cloud Functions runs JavaScript, you need to "transpile" your TypeScript into JavaScript before it can run on Node.js. The Firebase CLI understands this, and all TypeScript projects you initialize with the new CLI are automatically compiled as part of every code deployment.
When you initialize your TypeScript project, the Firebase CLI recommends you use TSLint. We combed through every rule in TSLint to pick a set of safe defaults. We try not to enforce our coding style, but we will prevent deploys if we're fairly certain your code has a bug. This includes the most common error when writing Cloud Functions: forgetting to return a promise!
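For instance, here's a sketch of the kind of database trigger where that bug shows up (the path and data are made up for illustration); returning the promise from the write is what the linter wants to see:

import * as functions from 'firebase-functions';

export const mirrorMessage = functions.database.ref('/messages/{messageId}')
    .onWrite(event => {
      // Returning the promise tells Cloud Functions to wait for the write
      // to finish; dropping this return statement is the classic bug.
      return event.data.ref.root.child(`copies/${event.params.messageId}`)
          .set(event.data.val());
    });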
TSLint can detect warnings and errors. Warnings are shown during deploys and errors will block deploys to protect you from breaking production. If you're absolutely sure that your code is bug-free, you can disable the linter on specific lines with rule flags:
/* tslint:disable:<rule> */
myCode();
/* tslint:enable:<rule> */
Or you can disable the rule globally by removing the rule from tslint.json.
The Firebase CLI is able to automatically transpile and lint your TypeScript code thanks to another new Firebase CLI feature: lifecycle hooks. These hooks let you run your own commands automatically at specific points in a deploy. The first two hooks, "predeploy" and "postdeploy", run before and after a feature is deployed, and they work with all Firebase features (Cloud Functions, Hosting, Storage, Database, etc.). With these hooks you can, for example, compile your code before every deploy or tag a release right after one.
To add a lifecycle hook, add either "predeploy" or "postdeploy" as a subkey in that feature's stanza of firebase.json. For example, this is the predeploy hook that compiles TypeScript before deploying:
{ "functions": { "predeploy": "npm --prefix functions run build" } }
The following postdeploy hook tags the current git commit as production (warning: this assumes you don't have a branch named 'production-functions').
{ "functions": { "postdeploy":""git tag -f production-functions && git push -f origin production-functions" } }
We've extended our Cloud Functions docs with information about TypeScript.
Let us know how TypeScript affects your development process by tweeting @Firebase. Has it helped catch bugs early? What linter rules did we miss? What are your favorite lifecycle hooks?
A long while back, David East wrote a handy blog post about using the Firebase CLI to read and write your Firebase Realtime Database. The CLI has evolved a lot since then, so I'd like to share some of what's changed (and new!).
When I first started working with Realtime Database, I'd spend a fair amount of time in the Firebase console manually entering some data to work with. It's kinda fun to make changes there, then see them immediately in my app! But I soon discovered that it's kind of repetitive and time consuming to test like that. Instead, I could write a program to make the changes for me, but that wasn't a very flexible option. For easy reading and writing of data in my database, I found that the Firebase CLI is the best option for me. So, I'll share some of what it does here and how it can come in handy. All my examples will be using the Bash shell - you may have to modify them for other shells.
The Firebase CLI requires you to set aside a project directory, log in, and select a project that you want to work with, so be sure to follow the instructions to get set up with your existing project.
To write data from the command line use the firebase database:set command:
firebase database:set /import data.json
The first argument to database:set is the path within the database to be written (here, /import), and the second is the JSON file to read from. If you don't have a file and would rather provide the JSON on the command line, you can also do that with the --data flag:
firebase database:set /import --data '{"foo": "bar baz"}'
Notice that the JSON is quoted for the command line with single quotes. Otherwise, the space between the colon and "bar" would fool your shell into thinking that there are two arguments there. You can't use double quotes to quote this JSON string either, because JSON uses those quotes for its own strings. Escaping JSON for a unix command line can be tricky, so be careful about that! (For further thought: what if there was a single quote in one of the JSON strings?)
Also, you can pipe or redirect JSON to stdin. So, if you have a program that generates some JSON to add to your database, you can do it like this:
echo '{"foo": "bar baz"}' | firebase database:set /import --confirm
Notice that the --confirm flag is passed here to prevent the command from asking if you're OK potentially overwriting data. Piping to stdin won't work without it!
The database:set command is great for initially populating your database with a setup script. If you run automated integration tests, the CLI is a handy way of scripting the initialization of your test environment.
It's also super handy for quickly triggering Cloud Functions database triggers, so you don't have to type in stuff at the command prompt every time you have something complicated to test.
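For example, given a hypothetical trigger like the sketch below, a single firebase database:set /messages/test --data '{"text":"hi"}' from your terminal fires it right away:

import * as functions from 'firebase-functions';

// Hypothetical trigger: runs whenever anything under /messages changes.
export const logMessage = functions.database.ref('/messages/{messageId}')
    .onWrite(event => {
      console.log('Message changed:', event.params.messageId, event.data.val());
      return null;
    });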
Reading data from your database with the Firebase CLI is similarly easy. Here's how you fetch all the data under /messages as a JSON blob:
firebase database:get /messages
To save the output to a file, you can use a shell redirect, or the --output flag:
firebase database:get /messages > messages.json
firebase database:get /messages --output messages.json
You'll notice the JSON output is optimized for space, which makes it hard to read. For something a little easier on the eyes, you can have the output "pretty-printed" for readability:
firebase database:get /messages --pretty
You can also sort and limit data just like the Firebase client APIs.
firebase database:get /messages --order-by date --limit-to-last 10
To see all the options for reading and sorting, be sure to see the CLI help (all Firebase CLI commands share their usage like this):
firebase database:get --help
You've probably used the Realtime Database push function to add data to a node in your database. You can do the same with the CLI:
firebase database:push /messages --data '{"name":"Doug","text":"I heart Firebase"}'
This will create a unique push id under /messages and add the data under it. (Did you know that push IDs recently switched from starting with "-K" to "-L"?)
If you want to update some values at a location without overwriting that entire location, use database:update:
firebase database:update /users/-L-7Zl_CiHW62YWLO5I7 --data '{"name":"CodingDoug"}'
For those times when you need to remove something completely, there is database:remove. It deletes everything at the path you give it, unconditionally, and pointed at the root it will blow away your entire database, kinda like rm -rf /. Be careful with this one:
firebase database:remove / --confirm
Sometimes you might want to simply copy the contents of your database from one project to another (for example, your development environment to staging). This is really easy by piping the stdout of database:get to the stdin of database:set:
firebase --project myproject-dev database:get / | \
    firebase --project myproject-staging database:set / --confirm
Note here the use of --project to specify which Firebase project is to be used for reading and writing. This is your project's unique id in the Firebase console.
If you find yourself repeating a set of commands, it's probably time to make a bash function. Save your functions to your .bash_profile and you'll be able to use them from anywhere on your shell command line.
Do you often copy data between databases? The function below makes it easy:
function transfer_to() {
  local src_db="${1}"
  local dest_db="${2}"
  local path="${3:-/}"
  firebase database:get --project "$src_db" "$path" |
    firebase --project "$dest_db" database:set "$path" --confirm
}
To use the function just call the transfer_to command (as if it were any other command) with the names of the project to copy to and from:
transfer_to myproject-dev myproject-staging
The command line is one of the most versatile tools in an engineer's toolbox. What are you doing with the CLI? We'd love to hear from you, so please shout out to us on Twitter. If you have any technical questions, post those on Stack Overflow with the firebase tag. And for bug reports and feature requests, use this form.
Scaling your Realtime Database just got a lot easier. We're excited to announce multi-database support in your Firebase Projects!
The Realtime Database is capable of handling a lot of traffic, however it does have its limitations. Scaling past these limits requires "sharding" your data across multiple databases to handle the load. Traditionally, you would need to create another project to get another database. If you need to scale again, you would need to repeat this process.
While this is possible, it's not exactly easy. Managing data across multiple projects is no fun, and authentication is different per project, which generally requires you to use Custom Authentication in Firebase Auth and can be a lot more work. We're happy to say that multi-database solves these problems.
Multi-database allows you to create multiple database instances in a single project, which eliminates managing data across multiple projects. To get started, you need to be on the Blaze plan. In the data viewer you can click the triple-dot icon to create new database instances:
To access data from a secondary instance, pass its absolute URL when you get a reference to that database:
const app = firebase.initializeApp({
  // use your main config
  databaseURL: "https://multi-db.firebaseio.com/"
});

// This is the default DB.
const db1 = app.database();

// Reference the second DB instance.
// Keep in mind that you need to upgrade to the latest release before this will work!
const db2 = app.database("https://multi-db501c7.firebaseio.com/");
Since these databases are in the same project, they share the same Authentication session, which means no custom server is required.
Important note on SDK versions: Keep in mind that if you have an existing app as of the time of this blog posting, you'll need to upgrade to the latest SDK versions before this will work. (For node admin, you need at least 5.5.0, and inside Cloud Functions you will need at least 0.7.3).
How you shard your database depends on your application. However, there are many useful techniques.
The Master Shard
You can create a "master shard" that contains the mappings to where the data is located in other database shards. This allows you to only request heavier sets of data when needed.
{ "chatrooms": { "general": "room-db-general", "randomchat": "room-db-randomchat", "sweetgifs": "room-db-sweetgifs" } }
Bucketing
You can bucket data per database. This means you can have a users database, a messages database, and a receipts database.
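Continuing from the initializeApp snippet earlier, a bucketed setup might look like this sketch (the instance URLs are made up):

// Hypothetical bucketed instances in one project.
const usersDb = app.database('https://myproject-users.firebaseio.com/');
const messagesDb = app.database('https://myproject-messages.firebaseio.com/');
const receiptsDb = app.database('https://myproject-receipts.firebaseio.com/');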
Per customer
If you're developing a multi-tenant service, another option is to create a database per customer. This approach keeps each customer's database isolated, so heavy load or an outage in one doesn't affect all of your customers.
Each database instance has its own set of security rules. Sharded databases can handle different structures which means you can apply different rules based on that database's purpose. You can manage and test each database's rule set in the console. It's important to note that databases are completely independent. This means you cannot access another database's data in rules evaluation.
We're also excited to announce that Cloud Functions for Firebase supports multi-database as well! You can specify which database you wish to trigger events from:
const functions = require('firebase-functions');
const firebase = require('firebase-admin');
firebase.initializeApp(functions.config().firebase);

exports.copymsg = functions.database.instance('room-db-randomchat').ref('/messages')
  .onWrite(event => {
    const { data } = event;
    const notificationDb = firebase.database(<your-full-db-url>);
    return notificationDb.ref(`notifications/${data.key}`).set(data.val());
  });
Note that the absolute URL is not required for functions. If no database instance is provided Cloud Functions uses the default database.
const functions = require('firebase-functions');
const firebase = require('firebase-admin');
firebase.initializeApp(functions.config().firebase);

// triggers off your default database
exports.sanitize = functions.database.ref('/messages').onWrite(event => {
  // sanitize message here
});
Multi-database is an easier way to get to scale with the Realtime Database. We hope you like it, and if you want to learn more check out our documentation. We also have an official support channel, a Slack channel, and we monitor our StackOverflow tags. Don't hesitate to reach out to us!
Firebase Cloud Messaging (FCM) is a cross-platform messaging solution that reliably delivers messages at no cost. FCM sends over 400 billion messages per day. Today we are excited to announce the launch of a new RESTful API, the FCM HTTP v1 API, that makes it safer and easier to send messages to your cross-platform applications. All existing FCM clients can receive messages sent via the new FCM API -- it does not require any changes on the client side.
Security
The new FCM API uses the OAuth2 security model. In the event that an access token becomes public, it can only be used for an hour or so before it expires. Refresh tokens are not transmitted as often and are thus much less likely to be captured.
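To make that concrete, here's a sketch of minting a short-lived access token in Node with the googleapis library and a downloaded service account key (the key file path is a placeholder):

import { google } from 'googleapis';

const key = require('./service-account.json'); // placeholder path to your service account key
const SCOPES = ['https://www.googleapis.com/auth/firebase.messaging'];

// Exchange the service account credentials for a short-lived OAuth2 access token.
const jwtClient = new google.auth.JWT(key.client_email, undefined, key.private_key, SCOPES);

jwtClient.authorize((err, tokens) => {
  if (err) throw err;
  console.log('Short-lived access token:', tokens && tokens.access_token);
});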
Sending messages to multiple platforms is possible with the legacy API. However, as you add functionality to messages, sending to multiple platforms becomes difficult. With the new FCM API, sending messages to multiple platforms is very easy.
You can still send simple messages to multiple platforms using the common top level fields. For example, you can send this message informing users about a sale:
{ "message": { "topic":"sale-watchers", "notification": { "title":"Check out this sale!", "body":"All items half off through Friday" } }
When you send a notification like this one to devices subscribed to a topic, you probably want them to be taken to the description of the item. On Android you would compose a message including a "click_action" field indicating the activity to open. On iOS, APNs relies on a "category" indicating the action to take upon clicking, including which view to show.
"click_action"
"category"
Before, since these keys were unique to their respective platforms, developers would have to create two separate messages. Now, we can use platform-specific fields together with common ones in a single message:
{ "message": { "topic":"sale-watchers", "notification": { "title":"Check out this sale!", "body":"All items half off through Friday" }, "android":{ "notification"{ "click_action":"OPEN_ACTIVITY_3" } }, "apns": { "payload": { "aps": { "category": "SALE_CATEGORY" } } } } }
Note: In this case web apps subscribed to the 'sale-watchers' topic will receive a notification message with the defined title and body.
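For reference, a payload like the ones above is sent with a single POST to the v1 endpoint, authorized with the OAuth2 access token. Here's a sketch using node-fetch (any HTTP client works; the project ID and token are placeholders you supply):

import fetch from 'node-fetch';

async function sendSaleMessage(accessToken: string, projectId: string) {
  const res = await fetch(
    `https://fcm.googleapis.com/v1/projects/${projectId}/messages:send`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      // The body is the same JSON shown above, wrapped under "message".
      body: JSON.stringify({
        message: {
          topic: 'sale-watchers',
          notification: {
            title: 'Check out this sale!',
            body: 'All items half off through Friday',
          },
        },
      }),
    }
  );
  console.log('FCM response:', res.status, await res.json());
}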
The new FCM API fully supports messaging options available on iOS, Android and Web. Since each platform has its own defined block in the JSON payload, we can easily extend to other platforms as needed. If a future IoT messaging protocol requires a security_key field we could easily support an iot block within the FCM payload.
{ "message": { "topic":"sale-watchers", "notification": { "title":"Check out this sale!", "body":"All items half off through Friday" }, "android":{ "notification"{ "click_action":"OPEN_ACTIVITY_3" } }, "apns": { "payload": { "aps": { "category": "SALE_CATEGORY" } } } "iot": { "security_key": "SECURITY_KEY" } } }
The new FCM API is a more secure, cross-platform, future-proof way of sending messages to FCM clients. If you are currently using the FCM legacy API, or if you are interested in using FCM to send messages to your apps, give the new FCM API a try. See the FCM guides and reference docs for more:
About FCM
Authorize requests
Build message requests
Migrate from GCM to FCM on Android
Migrate from GCM to FCM on iOS
When the folks at Fabric joined Firebase earlier this year, we aligned around a common mission: provide developers like you with a platform that solves common problems across the app development lifecycle, so you can focus on building an awesome user experience.
One area — among many — where Fabric excelled was in its console and dashboards. We've been hard at work for the last several months, working with them to bring together the best parts of Fabric and Firebase. Today, we're excited to share some improvements to the Firebase console.
We started by redesigning the navigation, to more accurately reflect the way your team works. We've clustered Firebase products into four groups: Develop, Stability, Analytics, and Grow. All of the products that you're used to seeing in the Firebase console are still there; we've simply reorganized them in a way that makes it simpler to navigate between them as you use more products across the Firebase platform.
We've also redesigned the Project Home screen in the Firebase console to bring a few important metrics front and center. Now, when you first open a project in Firebase, you'll see four key metrics: 30-day crash-free user rate, 30-day crashes, daily active users, and monthly active users, along with graphs that display these trends over time. In our research, we found that the vast majority of the time, developers are looking for one of those four metrics, so we made them easily accessible from the Project Home landing page.
Another well-loved Fabric console feature is the Latest Release section. This dashboard gives you all the most important insights from your most recent app release, so you can get a quick snapshot of what's going well and what might need to be rolled back. We've brought this section into the redesigned Firebase Console; you'll find it under the Analytics section of the navigation bar.
Starting today, you'll also see an Analytics dashboard that is organized into simple cards, each underneath an easy-to-understand question. Data organized around jargon like "DAUs" or "retention cohorts" is difficult to navigate, so we've restructured the dashboard around questions you have about your app, like "where are my users engaged?" or "how well do I retain users?" Our research confirmed that 90% of users preferred this design, and we hope you find it helpful too!
Another thing we've learned from our friends at Fabric and heard from all of you is that having information in realtime is critical. Whether you're tracking a new app release or monitoring the status of a bug fix, you need to understand what's happening in your app in realtime, so you can make changes and prioritize work accordingly.
To help with this, we've added realtime data on crashes and active users to a card that you'll see in the new Analytics dashboard, as well as the Latest Release section of the console. This is just the first step and, over time, we plan to make realtime data more prevalent across the rest of the Firebase console.
As always, you can contact us at https://firebase.google.com/support/ with feedback and questions. Happy building!