This article is part of the weekly learning pathways we’re releasing leading up to Firebase Summit. See the full pathway and register for the summit here.
Earlier this year at Google I/O, we announced Firebase App Check, Firebase's new platform for protecting your Firebase APIs from abuse. Not only can App Check protect hosted APIs (such as Cloud Storage for Firebase, Firebase Realtime Database, and others), it can also be used to protect your own backend resources, whether they are run in a managed environment such as Cloud Run or hosted on your own infrastructure.
To prevent abuse, your public APIs should verify that the calling application is authorized to make requests, regardless of whether a user credential is present. Imagine you run a backend that provides the API for a free mobile app; your app might be funded with ads, so you should ensure that all requests originate from your app—and not someone else's app!
To protect your backend with App Check, your apps should send an App Check token with every request. Apps built with the Firebase SDKs and properly configured for App Check will automatically obtain and refresh App Check tokens for you. They will also automatically send those tokens along with every request to supported Firebase services such as Cloud Storage for Firebase, Cloud Functions for Firebase, and Firebase Realtime Database, and these services will automatically verify those tokens for you.
On the other hand, if you run your services on your own infrastructure, you are responsible for making sure that:

- your app obtains an App Check token and sends it along with every request (for example, in an X-Firebase-AppCheck header), and
- your backend verifies the validity of that token before serving the request.
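On the client side, for example, a web app can fetch a current App Check token from the Firebase SDK and attach it to requests to your own backend. The following is a minimal sketch using the modular Firebase Web SDK with a reCAPTCHA v3 provider; the Firebase config, site key, and /yourApiEndpoint route are placeholders, and the exact API surface may vary by SDK version:

import { initializeApp } from "firebase/app";
import { initializeAppCheck, ReCaptchaV3Provider, getToken } from "firebase/app-check";

// Placeholder config and site key -- substitute your own values.
const app = initializeApp({ /* your Firebase config */ });
const appCheck = initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider("your-recaptcha-v3-site-key"),
  isTokenAutoRefreshEnabled: true,
});

async function callYourApi() {
  // Obtain a current App Check token (the SDK caches and refreshes it).
  const { token } = await getToken(appCheck, /* forceRefresh= */ false);

  // Send the token along with the request in the X-Firebase-AppCheck header,
  // which is the header the server-side middleware below expects.
  return fetch("/yourApiEndpoint", {
    headers: { "X-Firebase-AppCheck": token },
  });
}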
In Node.js backends running in trusted environments, such as Cloud Run, Cloud Functions, or your own server, it is common practice to use middleware modules to integrate cross-cutting concerns like this. Here's a code snippet that defines an Express.js middleware layer that verifies the App Check token using the Firebase Admin SDK:
const express = require('express');
const firebaseAdmin = require('firebase-admin');

const app = express();
firebaseAdmin.initializeApp();

const appCheckVerification = async (req, res, next) => {
  const appCheckClaims = await verifyAppCheckToken(req.header('X-Firebase-AppCheck'));

  if (!appCheckClaims) {
    res.status(401);
    return next('Unauthorized');
  }

  next();
};

const verifyAppCheckToken = async (appCheckToken) => {
  if (!appCheckToken) {
    return null;
  }

  try {
    // `await` here so that a rejected promise is caught by the catch below.
    return await firebaseAdmin.appCheck().verifyToken(appCheckToken);
  } catch (err) {
    return null;
  }
};

app.get('/yourApiEndpoint', [appCheckVerification], (req, res) => {
  // Handle request.
});
For more details, check out our documentation.
App Check tokens are implemented as JSON Web Tokens (JWTs), as specified by RFC 7519. This means they are signed JSON objects. To assert that an App Check token is legitimate, you must perform the following steps:

1. Obtain the Firebase App Check public keys from https://firebaseappcheck.googleapis.com/v1beta/jwks.
2. Verify the signature on the App Check token.
3. Ensure the token's header uses the algorithm RS256.
4. Ensure the token's header has type JWT.
5. Ensure the token is issued by App Check under your Firebase project.
6. Ensure the token is not expired.
7. Ensure the token's audience matches your project.
8. Optionally, check the token's subject (the app ID) against an allow list of your apps.
The following example performs the necessary steps in Ruby using the jwt gem as a Rack middleware layer. Many languages have similar JSON Object Signing and Encryption (JOSE) libraries that you can use for this purpose.
require 'json'
require 'jwt'
require 'net/http'
require 'uri'

class AppCheckVerification
  def initialize(app, options = {})
    @app = app
    @project_number = options[:project_number]
  end

  def call(env)
    app_id = verify(env['HTTP_X_FIREBASE_APPCHECK'])
    return [401, { 'Content-Type' => 'text/plain' }, ['Unauthenticated']] unless app_id

    env['firebase.app'] = app_id
    @app.call(env)
  end

  def verify(token)
    return unless token

    # 1. Obtain the Firebase App Check public keys
    # Note: It is not recommended to hard code these keys as they rotate,
    # but you should cache them for up to 6 hours.
    uri = URI('https://firebaseappcheck.googleapis.com/v1beta/jwks')
    jwks = JSON(Net::HTTP.get(uri))

    # 2. Verify the signature on the App Check token
    payload, header = JWT.decode(token, nil, true, jwks: jwks, algorithms: 'RS256')

    # 3. Ensure the token's header uses the algorithm RS256
    return unless header['alg'] == 'RS256'

    # 4. Ensure the token's header has type JWT
    return unless header['typ'] == 'JWT'

    # 5. Ensure the token is issued by App Check
    return unless payload['iss'] == "https://firebaseappcheck.googleapis.com/#{@project_number}"

    # 6. Ensure the token is not expired
    return unless payload['exp'] > Time.new.to_i

    # 7. Ensure the token's audience matches your project
    return unless payload['aud'].include? "projects/#{@project_number}"

    # 8. The token's subject will be the app ID, you may optionally filter against
    # an allow list
    payload['sub']
  rescue
  end
end

class Application
  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ["Hello app #{env['firebase.app']}"]]
  end
end

use AppCheckVerification, project_number: 1234567890
run Application.new
If your application uses content delivery networks (CDNs) to cache content closer to your users, you can use App Check to filter out abusive traffic at the edge. Since the Firebase Admin SDK's App Check functionality is currently only available in Node.js and not all CDN providers support the Node.js runtime, you may need to verify App Check tokens in another runtime supported by the CDN. For this use case, you can adapt the following example for Cloudflare Workers:
import { JWK, JWS } from "node-jose";

// Specify your project number to ensure only your apps make requests to your CDN
const PROJECT_NUMBER = 1234567890;

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const appCheckToken = request.headers.get('X-Firebase-AppCheck');
  const appId = await verifyAppCheckToken(appCheckToken);
  if (!appId) {
    return new Response("Unauthorized", { status: 401 });
  }
  return new Response(`Hello app ${appId}`, {
    headers: { "content-type": "text/plain" }
  });
}

async function verifyAppCheckToken(encodedToken) {
  if (!encodedToken) {
    return null;
  }

  // 1. Obtain the Firebase App Check Public Keys
  // Note: It is not recommended to hard code these keys as they rotate,
  // but you should cache them for up to 6 hours.
  const jwks = await fetch("https://firebaseappcheck.googleapis.com/v1beta/jwks", {
    headers: {
      "content-type": "application/json;charset=UTF-8",
    }
  });

  // 2. Verify the signature on the App Check token
  const keystore = await JWK.asKeyStore(await jwks.json());
  const token = await JWS.createVerify(keystore).verify(encodedToken);

  // 3. Ensure the token's header uses the algorithm RS256
  if (token.header["alg"] !== "RS256") {
    return null;
  }

  // 4. Ensure the token's header has type JWT
  if (token.header["typ"] !== "JWT") {
    return null;
  }

  const payload = JSON.parse(token.payload.toString());

  // 5. Ensure the token is issued by App Check
  if (payload["iss"] !== `https://firebaseappcheck.googleapis.com/${PROJECT_NUMBER}`) {
    return null;
  }

  // 6. Ensure the token is not expired
  if (Date.now() > payload["exp"] * 1000) {
    return null;
  }

  // 7. Ensure the token's audience matches your project
  if (!payload["aud"].includes(`projects/${PROJECT_NUMBER}`)) {
    return null;
  }

  // 8. The token's subject will be the app ID, you may optionally filter against
  // an allow list
  return payload["sub"];
}
Apigee is Google Cloud's comprehensive API management platform. In Apigee, you can easily implement a policy for your API proxy that checks for the presence and validity of Firebase App Check tokens on all incoming requests.
In the following example, we will check for the presence of the Firebase App Check token in the request header X-Firebase-AppCheck, ensure that it is valid, and verify that it was issued by the correct project.
First, in your API Proxy, add a Verify JWT policy; you can enter any Display Name.
Similar to the examples we have seen so far, you will need to perform all of the following steps in this policy:

- Set <Source> to request.headers.X-Firebase-AppCheck so the policy reads the App Check token from the request header.
- Set <Algorithm> to RS256.
- Point the policy's <PublicKey> at the App Check JWKS endpoint, https://firebaseappcheck.googleapis.com/v1beta/jwks.
- Set <Audience> to projects/{project_number}, replacing {project_number} with your Firebase project number.
- Set <Issuer> to https://firebaseappcheck.googleapis.com/{project_number}.
- Optionally, set <Subject> to your app's App ID to only accept tokens issued to that app.
Following these steps, your configuration should look like the following:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<VerifyJWT continueOnError="false" enabled="true" name="Firebase-App-Check-Token-Verification">
  <DisplayName>Firebase App Check Token Verification</DisplayName>
  <Algorithm>RS256</Algorithm>
  <Source>request.headers.X-Firebase-AppCheck</Source>
  <PublicKey>
    <JWKS uri="https://firebaseappcheck.googleapis.com/v1beta/jwks"/>
  </PublicKey>
  <!-- Be sure to use your real project number in <Issuer> and <Audience>. -->
  <Issuer>https://firebaseappcheck.googleapis.com/123456789</Issuer>
  <Audience>projects/123456789</Audience>
  <!-- You can also optionally check that the Subject matches your app's App Id. -->
  <Subject><!-- Insert your app's App ID here. --></Subject>
</VerifyJWT>
Finally, add this policy to your Proxy Endpoint's pre-flow, and save this configuration as a new revision. Once you re-deploy the proxy at this revision, any request that arrives at the proxy must have a valid Firebase App Check token in the X-Firebase-AppCheck header, or the request will be rejected.
Securing your app and your resources is critical. Using Firebase Authentication and Firebase Security Rules helps protect access to user data, and using Firebase App Check helps mitigate fraud and secure access to your backend resources—whether those are Firebase resources or your own. View the full learning pathway on protecting your app from abuse for additional resources.
And don’t forget to register for Firebase Summit and join us on November 10th to learn how Firebase can help you accelerate your app development, release with confidence and scale with ease!
The journey of building and releasing an app can sometimes feel like scaling a mountain – both can be filled with hurdles and require a lot of hard work. At Firebase, our goal is to make this journey easier by providing you with tools and resources you need to build and grow apps users love.
That is why every year, we bring the community together at Firebase Summit to share exciting new product updates, answer your burning questions, and provide hands-on training so you can get the most out of our platform. This year won’t be any different, and we are excited to announce that Firebase Summit will be returning as a virtual event on November 10th, 2021 at 9:30am PST. We also have a few exciting activities leading up to Firebase Summit, so read on for what to expect.
Hands-on Learning Experiences
To help you get ready for Firebase Summit, and deepen your knowledge about our products and services, we’ll release a new set of learning pathways every week leading up to the main event. These learning pathways will consist of easy-to-follow codelabs, articles, interactive videos, and quizzes. After completing a pathway, you’ll also have the opportunity to earn a shiny new Google developer badge. The first weekly pathway will launch on October 13th, so check out the event website for more details.
Community Talks
One of the best ways to learn about Firebase is from other developers. On November 3rd, we’ll host community talks where you will get the opportunity to hear from Google Developer Experts on topics like how to build enterprise scale apps with Firebase, implement authorization models, and so much more. Mark your calendars and visit the event website where we’ll highlight these sessions.
Live Keynote, On-demand Technical Sessions, and More
The pathways and community talks lead up to Firebase Summit on November 10th, which will kick off with a jam-packed keynote at 9:30 am PST. Join us to learn how Firebase can help you accelerate app development, run your app with confidence, and scale your business with ease. After the keynote, you can ask questions in the chat and have them answered live by the Firebase team during #AskFirebase. We will also have new on-demand technical sessions on our latest announcements, and demos that you can view at your convenience.
We’ll be sharing more details about Firebase Summit 2021 in the coming weeks so stay tuned. In the meantime, register for the event, subscribe to the Firebase YouTube channel, and follow us on Twitter to join the conversation. #FirebaseSummit
As an app developer, you probably send hundreds of app notifications to users nudging them to try a new game level, complete a purchase, read a new article, and so on. But have you ever wondered what impact those notifications are having on user behavior? Are they actually improving the metrics you care about?
Firebase Cloud Messaging enables you to deliver and receive messages and notifications on Android, iOS, and the web at no cost. Measuring the impact of these notifications is important but somewhat difficult. Cloud Messaging has always used Google Analytics to count sent notifications, and now there's a new way to see other user events related to an opened notification across sessions or over a longer period of time. Using these events will not only provide more information about the notifications you're sending to users, it will also enable you to better measure their impact.
With analytics labels in Cloud Messaging, you can apply labels to messages, which Google Analytics can then use to track all events related to your notifications beyond just counting sent ones. What is an analytics label? It's simply a text string that you can add to any message. For example, let's say you wanted to see all of the opened notifications for users who signed up in January. You can now do that by attaching a "january" label to your notifications when sending.
You can attach a label for any notification sent via the Firebase Cloud Messaging API as well as for a messaging campaign in Firebase Console. When you set up a Cloud Messaging campaign in the Notifications composer, you can use the dropdown menu in step 4 for Conversion events to choose an analytics label.
This will attach an analytics label to all messages sent as part of this campaign.
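If you send messages through the Firebase Cloud Messaging API, you can set the label programmatically instead. Here's a minimal sketch using the Firebase Admin SDK for Node.js; the device registration token and message contents are placeholders:

const admin = require("firebase-admin");
admin.initializeApp();

// Placeholder device registration token.
const registrationToken = "DEVICE_REGISTRATION_TOKEN";

const message = {
  token: registrationToken,
  notification: {
    title: "New level unlocked!",
    body: "Jump back in and give it a try.",
  },
  // The analytics label lets Google Analytics group all events
  // related to this message under "january".
  fcmOptions: {
    analyticsLabel: "january",
  },
};

admin
  .messaging()
  .send(message)
  .then((messageId) => console.log("Sent message:", messageId))
  .catch((err) => console.error("Error sending message:", err));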
After some time has passed, you can get insight into the impact of those messages. Cloud Messaging in Firebase Console has a Reports tab with a graph showing funnel analysis: the number of notifications sent, received, displayed (impressions), and opened for a given timeframe. Filtering by an analytics label shows the information for just that label.
Aside from seeing the impact of labels in Reports, you can use them for other Google Analytics analyses. For example, perhaps you want to compare how game play time during the winter holidays performed against the summertime; you can use comparisons to evaluate metrics like engagement time for those two messaging campaigns. You can also use the cohort exploration feature to compare user activity, and in fact, you can use the labels as event parameters on the messages in any Google Analytics analysis!
Viewing User Engagement card with label comparisons applied.
Cohort analysis using labeled notification campaign segments.
So there we have it! Using analytics labels in Cloud Messaging, we are able to leverage the full power of Google Analytics to measure the impact of messages sent via the API and Firebase Console!
We’ve heard your feedback about the challenges you’ve experienced when uploading dSYMs to Crashlytics, especially for apps with bitcode enabled. We’re working to improve this process and wanted to share a way to automate it by using fastlane.
When building apps without bitcode, the Crashlytics upload-symbols tool that you include as a run script automatically uploads your app’s dSYM files every time you rebuild your app. When the app sends us crash reports, these files allow us to symbolicate the stack frames included in the reports so that you see source code rather than compiled code on the Crashlytics dashboard.
However, for apps with bitcode enabled, the process of uploading dSYMs isn’t automatic. Instead, you need to download updated dSYMs from Apple and upload them manually to Crashlytics each time you submit a new version of the app to App Store Connect.
This manual process can be a pain, especially since non-technical team members are often responsible for uploading bitcode dSYMs. With fastlane, you can use a single command to automatically download and upload your bitcode dSYMs.
Fastlane is an open-source tool that automates the build and release workflow for iOS and Android apps, including tasks like code signing and generating screenshots. See the official fastlane documentation for more information.
There are a few fastlane concepts that will help you automate the process of uploading dSYMs to Crashlytics: actions, lanes, and Fastfiles.
An action is one of the building blocks of fastlane. Each action performs a single command, such as running tests or performing code validation.
One or more actions can be included in a lane. You can think of a lane as a workflow that is (usually) composed of related tasks. For this Crashlytics workflow, the lane will consist of three actions: downloading dSYMs from Apple, uploading dSYMs to Crashlytics, then cleaning up the files. Each lane also has a name and description.
And finally, a Fastfile manages the lane created in this workflow, as well as any others you use for your project.
You can set up fastlane in multiple ways, including with Bundler and Homebrew. For instructions, see fastlane's Getting Started for iOS guide. During the fastlane init step, make sure to choose the manual setup option.
Once fastlane is set up, the first step toward automating symbol uploads is configuring a lane. We’ll do this by editing the Fastfile that was created in the fastlane directory during the setup step. Let's start by modifying the default lane, which currently looks like this:
desc "Description of what the lane does" lane :custom_lane do # add actions here: https://docs.fastlane.tools/actions end
For the desc field, we’ll include a brief summary of the lane’s purpose:
desc "Downloads dSYM files from Apple for a given app version and uploads them to Crashlytics"
And then give the lane a name:
lane :upload_symbols do
Next, we'll add actions to our lane. There are three actions that we'll use: download_dsyms, upload_symbols_to_crashlytics, and clean_build_artifacts. (For fastlane’s documentation on any of these commands, run fastlane action command_name in your terminal.)
download_dsyms will do the work of downloading the new dSYM files from Apple. We’ll just need to provide the action with either 1) the version number and build for the app version that you want to download dSYMs for or 2) the live and latest version constants to download the live or latest version’s dSYMs, respectively. Optionally, you can also specify your app’s bundle ID and your App Store Connect username to avoid having to enter them manually when you run the action. All in all, this action will look similar to one of the following:
download_dsyms(version: "1.0.0", build: "1.0.0", app_identifier: "bundleID")
download_dsyms(version: "live")
download_dsyms(username: "username", version: "latest", app_identifier: "bundleID")
In this guide, we'll use the last option (i.e., the latest version, specifying both the bundle ID and the App Store Connect username).
upload_symbols_to_crashlytics will take the files from download_dsyms and upload them to Crashlytics. This command takes the path to the app's GoogleService-Info.plist file as a parameter, like so: upload_symbols_to_crashlytics(gsp_path: "path/to/GoogleService-Info.plist")
Note: If you’re using Swift Package Manager (instead of CocoaPods), you’ll need to download the Crashlytics upload-symbols tool, place it in your project directory, make it executable (chmod +x upload-symbols), then set the binary_path variable in the upload_symbols_to_crashlytics action: upload_symbols_to_crashlytics(gsp_path: "path/to/GoogleService-Info.plist", binary_path: "upload-symbols").
Lastly, we’ll use clean_build_artifacts to delete the dSYM files once they’ve been uploaded.
Our lane now looks something like this:
desc "Downloads dSYM files from Apple for a given app build and version and uploads them to Crashlytics" lane :upload_symbols do download_dsyms(username: "username", version: "latest", app_identifier: "bundleID") upload_symbols_to_crashlytics(gsp_path: "path/to/GoogleService-Info.plist") clean_build_artifacts end
(If you’re using Swift Package Manager, the only difference will be the binary_path variable in the upload_symbols_to_crashlytics action.)
And here's our Fastfile with just this one lane:
default_platform(:ios)

platform :ios do
  desc "Downloads dSYM files from Apple for a given app version and uploads them to Crashlytics"
  lane :upload_symbols do
    download_dsyms(username: "username", version: "latest", app_identifier: "bundleID")
    upload_symbols_to_crashlytics(gsp_path: "path/to/GoogleService-Info.plist")
    clean_build_artifacts
  end
end
To run this lane, run fastlane upload_symbols from the terminal inside your project folder (replace upload_symbols if you chose a different name for the lane). You’ll be asked to log into your App Store Connect account (and select your App Store Connect team, if you have more than one associated with your account). If needed, you can customize the actions in the file to include more information about your account to avoid having to log in manually.
Get started with fastlane and start automating your dSYM uploads!
We’re always looking for ways to make Crashlytics work better for developers. We want to thank everyone who has provided us with feedback on GitHub, Firebase Support, and other channels about the dSYM upload process, and we encourage you to continue to reach out with questions and suggestions!
The Cloud Firestore C++ SDK uses the firebase::firestore::FieldValue class to represent document fields. A FieldValue is a union type that may contain a primitive (like a boolean or a double), a container (e.g. an array), some simple structures (such as a Timestamp) or some Firestore-specific sentinel values (e.g. ServerTimestamp) and is used to write data to and read data from a Firestore database.
Other Firebase C++ SDKs use firebase::Variant for similar purposes. A Variant is also a union type that may contain primitives or containers of nested Variants; it is used, for example, to write data to and read data from the Realtime Database, to represent values read from Remote Config, and to represent the results of calling Cloud Functions using the Firebase SDK. If your application is migrating from Realtime Database to Firestore (or uses both side-by-side), or, for example, uses Firestore to store the results of Cloud Functions, you might need to convert between Variants and FieldValues.
In many ways, FieldValue and Variant are similar. However, it is important to understand that for all their similarities, neither is a subset of the other; rather, they can be seen as overlapping sets, each reflecting its domain. These differences make it impossible to write a general-purpose converter between them that would cover each and every case -- instead, handling each instance where one type doesn’t readily map to the other would by necessity have to be application-specific.
With that in mind, let’s go through some sample code that provides one approach to conversion; if your application needs to convert between FieldValue and Variant, it should be possible to adapt this code to your needs. Full sample code is available here.
The one area where FieldValues and Variants correspond to each other exactly is the primitive values. Both FieldValue and Variant support the exact same set of primitives, and conversion between them is straightforward:
// `Variant` -> `FieldValue`
FieldValue Convert(const Variant& from) {
  switch (from.type()) {
    case Variant::Type::kTypeNull:
      return FieldValue::Null();
    case Variant::Type::kTypeBool:
      return FieldValue::Boolean(from.bool_value());
    case Variant::Type::kTypeInt64:
      return FieldValue::Integer(from.int64_value());
    case Variant::Type::kTypeDouble:
      return FieldValue::Double(from.double_value());
    // Other cases are covered in the snippets below.
  }
}

// `FieldValue` -> `Variant`
Variant Convert(const FieldValue& from) {
  switch (from.type()) {
    case FieldValue::Type::kNull:
      return Variant::Null();
    case FieldValue::Type::kBoolean:
      return Variant(from.boolean_value());
    case FieldValue::Type::kInteger:
      return Variant(from.integer_value());
    case FieldValue::Type::kDouble:
      return Variant(from.double_value());
    // Other cases are covered in the snippets below.
  }
}
Variant distinguishes between mutable and static strings and blobs: a mutable string (or blob) is owned by the Variant and can be modified through its interface, whereas a static string (or blob) is not owned by the Variant (so the application needs to ensure it stays valid as long as the Variant’s lifetime hasn’t ended; typically, this is only used for static strings) and cannot be modified.
Firestore does not have this distinction -- the strings and blobs held by FieldValue are always immutable (like static strings or blobs in Variant) but owned by the FieldValue (like mutable strings or blobs in Variant). Because ownership is the more important concern here, Firestore strings and blobs should be converted to mutable strings and blobs in Variant:
// `FieldValue` -> `Variant`
case FieldValue::Type::kString:
  return Variant(from.string_value());
case FieldValue::Type::kBlob:
  return Variant::FromMutableBlob(from.blob_value(), from.blob_size());

// `Variant` -> `FieldValue`
case Variant::Type::kTypeStaticString:
case Variant::Type::kTypeMutableString:
  return FieldValue::String(from.string_value());
case Variant::Type::kTypeStaticBlob:
case Variant::Type::kTypeMutableBlob:
  return FieldValue::Blob(from.blob_data(), from.blob_size());
Both FieldValues and Variants support arrays (called “vectors” by Variant) and maps, so for the most part, converting between them is straightforward:
// `FieldValue` -> `Variant`
  case FieldValue::Type::kArray:
    return ConvertArray(from.array_value());
  case FieldValue::Type::kMap:
    return ConvertMap(from.map_value());
}

// `Variant` -> `FieldValue`
  case Variant::Type::kTypeVector:
    return ConvertArray(from.vector());
  case Variant::Type::kTypeMap:
    return ConvertMap(from.map());
}

// ...

FieldValue ConvertArray(const std::vector<Variant>& from) {
  std::vector<FieldValue> result;
  result.reserve(from.size());
  for (const auto& v : from) {
    result.push_back(Convert(v));
  }
  return FieldValue::Array(std::move(result));
}

FieldValue ConvertMap(const std::map<Variant, Variant>& from) {
  MapFieldValue result;
  for (const auto& kv : from) {
    // Note: Firestore only supports string keys. If it's possible
    // for the map to contain non-string keys, you would have to
    // convert them to a string representation or skip them.
    assert(kv.first.is_string());
    result[kv.first.string_value()] = Convert(kv.second);
  }
  return FieldValue::Map(std::move(result));
}

Variant ConvertArray(const std::vector<FieldValue>& from) {
  std::vector<Variant> result;
  result.reserve(from.size());
  for (const auto& v : from) {
    result.push_back(Convert(v));
  }
  return Variant(result);
}

Variant ConvertMap(const MapFieldValue& from) {
  std::map<Variant, Variant> result;
  for (const auto& kv : from) {
    result[Variant(kv.first)] = Convert(kv.second);
  }
  return Variant(result);
}
Firestore does not support nested arrays (that is, one array being a direct member of another array). FieldValue itself would not reject a nested array, though -- it will only be rejected by Firestore’s input validation when passed to a Firestore instance (like in a call to DocumentReference::Set).
The approach to handling this case would have to be application-specific. For example, you might simply omit nested arrays, perhaps logging a warning upon encountering them; on the other extreme, you may want to terminate the application:
FieldValue ConvertArray(const std::vector<Variant>& from) {
  std::vector<FieldValue> result;
  result.reserve(from.size());
  for (const auto& v : from) {
    if (v.type() == Variant::Type::kTypeVector) {
      // Potential approach 1: log and forget
      LogWarning("Skipping nested array");
      continue;

      // Potential approach 2: terminate
      assert(false && "Encountered a nested array");
      std::terminate();
    }
    result.push_back(Convert(v));
  }
  return FieldValue::Array(std::move(result));
}
Yet another approach might be to leave the nested arrays in place and rely on Firestore input validation to reject them (this approach is mostly applicable if you don’t expect your data to contain any nested arrays).
One possible workaround if you need to pass a nested array to Firestore might be to represent arrays as maps:
case Variant::Type::kTypeVector: {
  MapFieldValue result;
  const std::vector<Variant>& array = from.vector();
  for (int i = 0; i != array.size(); ++i) {
    result[std::to_string(i)] = Convert(array[i]);
  }
  return FieldValue::Map(std::move(result));
}
Another approach, which has the nice property of being generalizable to other cases, is to automatically translate the structure of “array-array” into “array-map-array” when converting to FieldValue.
If you decide to use this approach, you will need to ensure that the translated structure roundtrips properly (assuming your application needs bidirectional conversion). That is, an “array-map-array” structure within a FieldValue converts back to an “array-array” structure in Variant. To achieve this, the artificial map would have to be somehow marked to indicate that it does not represent an actual value in the database.
Once again, the implementation for this would be application-specific. You could add a boolean field called “special” with its value set to true and establish a convention that a map that contains a “special” field never represents user input. If this is not true for your application, you might use a more distinct name than “special” or come up with a different convention altogether.
These next two examples use “special” as a marker, but please keep in mind that it’s just one possible approach:
// `Variant` -> `FieldValue`
FieldValue Convert(const Variant& from, bool within_array = false) {
  switch (from.type()) {
    // ...
    case Variant::Type::kTypeVector:
      if (!within_array) {
        return ConvertArray(from.vector());
      } else {
        // Firestore doesn't support nested arrays. As a workaround, create an
        // intermediate map to contain the nested array.
        return FieldValue::Map({
            {"special", FieldValue::Boolean(true)},
            {"type", FieldValue::String("nested_array")},
            {"value", ConvertArray(from.vector())},
        });
      }
  }
}

FieldValue ConvertArray(const std::vector<Variant>& from) {
  std::vector<FieldValue> result;
  result.reserve(from.size());
  for (const auto& v : from) {
    result.push_back(Convert(v, /*within_array=*/true));
  }
  return FieldValue::Array(std::move(result));
}

// `FieldValue` -> `Variant`
Variant Convert(const FieldValue& from) {
  switch (from.type()) {
    // ...
    case FieldValue::Type::kArray:
      return ConvertArray(from.array_value());
    case FieldValue::Type::kMap: {
      const auto& m = from.map_value();
      // Firestore doesn't support nested arrays, so nested arrays are instead
      // encoded as an "array-map-array" structure. Make sure nested arrays
      // round-trip.
      // Note: `TryGet*` functions are helpers to simplify getting values
      // out of maps. See their definitions in the full sample code.
      bool is_special = TryGetBoolean(m, "special");
      if (is_special) {
        return ConvertSpecialValue(m);
      } else {
        return ConvertMap(from.map_value());
      }
    }
  }
}

Variant ConvertSpecialValue(const MapFieldValue& from) {
  // Note: in production code, you would have to handle
  // the case where the value is not in the map.
  // Note: `TryGet*` functions are helpers to simplify getting values
  // out of maps. See their definitions in the full sample code.
  std::string type = TryGetString(from, "type");
  if (type == "nested_array") {
    // Unnest the array.
    return ConvertArray(TryGetArray(from, "value"));
  }
  // ...
}
Finally, there are several kinds of entities supported by FieldValue that have no direct equivalent in Variant:

- Timestamp
- GeoPoint
- DocumentReference
Similarly to nested arrays, your application could omit these values, issue errors upon encountering them, or else convert them into some representation supported by Variant. The exact representation would depend on the needs of your application and on whether the conversion is bidirectional or not (that is, whether it should be possible to convert the representation back into the original Firestore type).
An approach that is general (if somewhat heavyweight) and allows bidirectional conversion is to convert such structs into “special” maps. It could look like this:
// `FieldValue` -> `Variant`
case FieldValue::Type::kTimestamp: {
  Timestamp ts = from.timestamp_value();
  MapFieldValue as_map = {
      {"special", FieldValue::Boolean(true)},
      {"type", FieldValue::String("timestamp")},
      {"seconds", FieldValue::Integer(ts.seconds())},
      {"nanoseconds", FieldValue::Integer(ts.nanoseconds())}};
  return ConvertMap(as_map);
}

case FieldValue::Type::kGeoPoint: {
  GeoPoint gp = from.geo_point_value();
  MapFieldValue as_map = {
      {"special", FieldValue::Boolean(true)},
      {"type", FieldValue::String("geo_point")},
      {"latitude", FieldValue::Double(gp.latitude())},
      {"longitude", FieldValue::Double(gp.longitude())}};
  return ConvertMap(as_map);
}

case FieldValue::Type::kReference: {
  DocumentReference ref = from.reference_value();
  std::string path = ref.path();
  MapFieldValue as_map = {
      {"special", FieldValue::Boolean(true)},
      {"type", FieldValue::String("document_reference")},
      {"document_path", FieldValue::String(path)}};
  return ConvertMap(as_map);
}

FieldValue ConvertSpecialValue(const std::map<Variant, Variant>& from) {
  // Special values are Firestore entities encoded as maps because they are not
  // directly supported by `Variant`. The assumption is that the map contains
  // a boolean field "special" set to true and a string field "type" indicating
  // which kind of an entity it contains.
  std::string type = TryGetString(from, "type");
  if (type == "timestamp") {
    Timestamp result(TryGetInteger(from, "seconds"),
                     TryGetInteger(from, "nanoseconds"));
    return FieldValue::Timestamp(result);
  } else if (type == "geo_point") {
    GeoPoint result(TryGetDouble(from, "latitude"),
                    TryGetDouble(from, "longitude"));
    return FieldValue::GeoPoint(result);
  }
  // ...
The only complication here is that to convert a “special” map back to a DocumentReference, you would need a pointer to a Firestore instance so that you may call Firestore::Document. If your application always uses the default Firestore instance, you might simply call Firestore::GetInstance. Otherwise, you can pass Firestore* as an argument to Convert or make Convert a member function of a class, say, Converter, that acquires a pointer to a Firestore instance in its constructor.
} else if (type == "document_reference") {
  DocumentReference result =
      firestore->Document(TryGetString(from, "document_path"));
  return FieldValue::Reference(result);
}
One more thing to note is that Realtime Database represents timestamps as the number of milliseconds since the epoch in UTC. If you intend to use the resulting Variant in the Realtime Database, a more natural representation for a Timestamp might thus be an integer field. However, you would have to provide some way to distinguish between numbers and timestamps in the Realtime Database -- a possible solution is to simply add a _timestamp suffix to the field name, but of course other alternatives are possible. In that case, the conversion from FieldValue to Variant might look like:
case FieldValue::Type::kTimestamp: {
  Timestamp ts = from.timestamp_value();
  int64_t millis = ts.seconds() * 1000 + ts.nanoseconds() / (1000 * 1000);
  return Variant(millis);
}
If bidirectional conversion is required, you would also have to somehow distinguish between numbers and timestamps when converting back to a FieldValue. If you're using the solution of adding _timestamp suffix to the field name, you would have to pass the field name to the converter. Another approach might be to use heuristics and presume that a very large number that readily converts to a reasonably recent date must be a timestamp.
Finally, there are some unique values in Firestore that represent a transformation to be applied to an existing value or a placeholder for a value to be supplied by the backend:

- Delete
- ServerTimestamp
- ArrayUnion
- ArrayRemove
- IncrementInteger
- IncrementDouble
Some of these values are only meaningful in Firestore, so most likely it wouldn't make sense to try to convert them in your application. Otherwise, Delete and ServerTimestamp, being stateless, can be straightforwardly converted to maps using the approach outlined above. If you're using Variant with the Realtime Database, you might want to represent a ServerTimestamp in the Realtime Database-specific format (a map that contains a single element: {".sv" : "timestamp"}):
{".sv" : "timestamp"}
case FieldValue::Type::kServerTimestamp:
  return ConvertMap({{".sv", FieldValue::String("timestamp")}});
Similarly, you may represent Delete as a null in the Realtime Database:
case FieldValue::Type::kDelete:
  return Variant::Null();
However, other than Delete and ServerTimestamp, the rest of the sentinel values are stateful, and there is no way to get their underlying value from a FieldValue, so lossless conversion is not possible. Likely the best thing to do is to ensure these values are never passed to the converter, and to assert if one is encountered.