• Home
  • Line#
  • Scopes#
  • Navigate#
  • Raw
  • Download
1#!/usr/bin/env python
2#
3# Copyright (C) 2008 The Android Open Source Project
4#
5# Licensed under the Apache License, Version 2.0 (the "License");
6# you may not use this file except in compliance with the License.
7# You may obtain a copy of the License at
8#
9#      http://www.apache.org/licenses/LICENSE-2.0
10#
11# Unless required by applicable law or agreed to in writing, software
12# distributed under the License is distributed on an "AS IS" BASIS,
13# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14# See the License for the specific language governing permissions and
15# limitations under the License.
16
17"""
18Given a target-files zipfile, produces an OTA package that installs that build.
19An incremental OTA is produced if -i is given, otherwise a full OTA is produced.
20
21Usage:  ota_from_target_files [options] input_target_files output_ota_package
22
23Common options that apply to both of non-A/B and A/B OTAs
24
25  --downgrade
26      Intentionally generate an incremental OTA that updates from a newer build
27      to an older one (e.g. downgrading from P preview back to O MR1).
28      "ota-downgrade=yes" will be set in the package metadata file. A data wipe
29      will always be enforced when using this flag, so "ota-wipe=yes" will also
30      be included in the metadata file. The update-binary in the source build
31      will be used in the OTA package, unless --binary flag is specified. Please
32      also check the comment for --override_timestamp below.
33
34  -i  (--incremental_from) <file>
35      Generate an incremental OTA using the given target-files zip as the
36      starting build.
37
38  -k  (--package_key) <key>
39      Key to use to sign the package (default is the value of
40      default_system_dev_certificate from the input target-files's
41      META/misc_info.txt, or "build/make/target/product/security/testkey" if
42      that value is not specified).
43
44      For incremental OTAs, the default value is based on the source
45      target-file, not the target build.
46
47  --override_timestamp
48      Intentionally generate an incremental OTA that updates from a newer build
49      to an older one (based on timestamp comparison), by setting the downgrade
50      flag in the package metadata. This differs from --downgrade flag, as we
51      don't enforce a data wipe with this flag. Because we know for sure this is
52      NOT an actual downgrade case, but two builds happen to be cut in a reverse
53      order (e.g. from two branches). A legit use case is that we cut a new
54      build C (after having A and B), but want to enfore an update path of A ->
55      C -> B. Specifying --downgrade may not help since that would enforce a
56      data wipe for C -> B update.
57
58      We used to set a fake timestamp in the package metadata for this flow. But
59      now we consolidate the two cases (i.e. an actual downgrade, or a downgrade
60      based on timestamp) with the same "ota-downgrade=yes" flag, with the
61      difference being whether "ota-wipe=yes" is set.
62
63  --wipe_user_data
64      Generate an OTA package that will wipe the user data partition when
65      installed.
66
67  --retrofit_dynamic_partitions
68      Generates an OTA package that updates a device to support dynamic
69      partitions (default False). This flag is implied when generating
70      an incremental OTA where the base build does not support dynamic
71      partitions but the target build does. For A/B, when this flag is set,
72      --skip_postinstall is implied.
73
74  --skip_compatibility_check
75      Skip checking compatibility of the input target files package.
76
77  --output_metadata_path
78      Write a copy of the metadata to a separate file. Therefore, users can
79      read the post build fingerprint without extracting the OTA package.
80
81  --force_non_ab
82      This flag can only be set on an A/B device that also supports non-A/B
83      updates. Implies --two_step.
84      If set, generate that non-A/B update package.
85      If not set, generates A/B package for A/B device and non-A/B package for
86      non-A/B device.
87
88  -o  (--oem_settings) <main_file[,additional_files...]>
89      Comma separated list of files used to specify the expected OEM-specific
90      properties on the OEM partition of the intended device. Multiple expected
91      values can be used by providing multiple files. Only the first dict will
92      be used to compute fingerprint, while the rest will be used to assert
93      OEM-specific properties.
94
95Non-A/B OTA specific options
96
97  -b  (--binary) <file>
98      Use the given binary as the update-binary in the output package, instead
99      of the binary in the build's target_files. Use for development only.
100
101  --block
102      Generate a block-based OTA for non-A/B device. We have deprecated the
103      support for file-based OTA since O. Block-based OTA will be used by
104      default for all non-A/B devices. Keeping this flag here to not break
105      existing callers.
106
107  -e  (--extra_script) <file>
108      Insert the contents of file at the end of the update script.
109
110  --full_bootloader
111      Similar to --full_radio. When generating an incremental OTA, always
112      include a full copy of bootloader image.
113
114  --full_radio
115      When generating an incremental OTA, always include a full copy of radio
116      image. This option is only meaningful when -i is specified, because a full
117      radio is always included in a full OTA if applicable.
118
119  --log_diff <file>
120      Generate a log file that shows the differences in the source and target
121      builds for an incremental package. This option is only meaningful when -i
122      is specified.
123
124  --oem_no_mount
125      For devices with OEM-specific properties but without an OEM partition, do
126      not mount the OEM partition in the updater-script. This should be very
127      rarely used, since it's expected to have a dedicated OEM partition for
128      OEM-specific properties. Only meaningful when -o is specified.
129
130  --stash_threshold <float>
131      Specify the threshold that will be used to compute the maximum allowed
132      stash size (defaults to 0.8).
133
134  -t  (--worker_threads) <int>
135      Specify the number of worker-threads that will be used when generating
136      patches for incremental updates (defaults to 3).
137
138  --verify
139      Verify the checksums of the updated system and vendor (if any) partitions.
140      Non-A/B incremental OTAs only.
141
142  -2  (--two_step)
143      Generate a 'two-step' OTA package, where recovery is updated first, so
144      that any changes made to the system partition are done using the new
145      recovery (new kernel, etc.).
146
147A/B OTA specific options
148
149  --disable_fec_computation
150      Disable the on device FEC data computation for incremental updates.
151
152  --include_secondary
153      Additionally include the payload for secondary slot images (default:
154      False). Only meaningful when generating A/B OTAs.
155
156      By default, an A/B OTA package doesn't contain the images for the
157      secondary slot (e.g. system_other.img). Specifying this flag allows
158      generating a separate payload that will install secondary slot images.
159
160      Such a package needs to be applied in a two-stage manner, with a reboot
161      in-between. During the first stage, the updater applies the primary
162      payload only. Upon finishing, it reboots the device into the newly updated
163      slot. It then continues to install the secondary payload to the inactive
164      slot, but without switching the active slot at the end (needs the matching
165      support in update_engine, i.e. SWITCH_SLOT_ON_REBOOT flag).
166
167      Due to the special install procedure, the secondary payload will be always
168      generated as a full payload.
169
170  --payload_signer <signer>
171      Specify the signer when signing the payload and metadata for A/B OTAs.
172      By default (i.e. without this flag), it calls 'openssl pkeyutl' to sign
173      with the package private key. If the private key cannot be accessed
174      directly, a payload signer that knows how to do that should be specified.
175      The signer will be supplied with "-inkey <path_to_key>",
176      "-in <input_file>" and "-out <output_file>" parameters.
177
178  --payload_signer_args <args>
179      Specify the arguments needed for payload signer.
180
181  --payload_signer_maximum_signature_size <signature_size>
182      The maximum signature size (in bytes) that would be generated by the given
183      payload signer. Only meaningful when custom payload signer is specified
184      via '--payload_signer'.
185      If the signer uses a RSA key, this should be the number of bytes to
186      represent the modulus. If it uses an EC key, this is the size of a
187      DER-encoded ECDSA signature.
188
189  --payload_signer_key_size <key_size>
190      Deprecated. Use the '--payload_signer_maximum_signature_size' instead.
191
192  --boot_variable_file <path>
193      A file that contains the possible values of ro.boot.* properties. It's
194      used to calculate the possible runtime fingerprints when some
195      ro.product.* properties are overridden by the 'import' statement.
196      The file expects one property per line, and each line has the following
197      format: 'prop_name=value1,value2'. e.g. 'ro.boot.product.sku=std,pro'
198
199  --skip_postinstall
200      Skip the postinstall hooks when generating an A/B OTA package (default:
201      False). Note that this discards ALL the hooks, including non-optional
202      ones. Should only be used if caller knows it's safe to do so (e.g. all the
203      postinstall work is to dexopt apps and a data wipe will happen immediately
204      after). Only meaningful when generating A/B OTAs.
205
206  --partial "<PARTITION> [<PARTITION>[...]]"
207      Generate partial updates, overriding ab_partitions list with the given
208      list.
209
210  --custom_image <custom_partition=custom_image>
211      Use the specified custom_image to update custom_partition when generating
212      an A/B OTA package. e.g. "--custom_image oem=oem.img --custom_image
213      cus=cus_test.img"
214
215  --disable_vabc
216      Disable Virtual A/B Compression, for builds that have compression enabled
217      by default.
218
219  --vabc_downgrade
220      Don't disable Virtual A/B Compression for downgrading OTAs.
221      For VABC downgrades, we must finish merging before doing data wipe, and
222      since data wipe is required for downgrading OTA, this might cause long
223      wait time in recovery.
224
225  --enable_vabc_xor
226      Enable the VABC xor feature. Will reduce space requirements for OTA
227
228  --force_minor_version
229      Override the update_engine minor version for delta generation.
230
231  --compressor_types
232      A colon ':' separated list of compressors. Allowed values are bz2 and brotli.
233
234  --enable_zucchini
235      Whether to enable to zucchini feature. Will generate smaller OTA but uses more memory.
236
237  --enable_lz4diff
238      Whether to enable lz4diff feature. Will generate smaller OTA for EROFS but
239      uses more memory.
240
241  --spl_downgrade
242      Force generate an SPL downgrade OTA. Only needed if target build has an
243      older SPL.
244
245  --vabc_compression_param
246      Compression algorithm to be used for VABC. Available options: gz, brotli, none
247"""
248
249from __future__ import print_function
250
251import logging
252import multiprocessing
253import os
254import os.path
255import re
256import shlex
257import shutil
258import struct
259import subprocess
260import sys
261import zipfile
262
263import care_map_pb2
264import common
265import ota_utils
266from ota_utils import (UNZIP_PATTERN, FinalizeMetadata, GetPackageMetadata,
267                       PropertyFiles, SECURITY_PATCH_LEVEL_PROP_NAME, GetZipEntryOffset)
268from common import IsSparseImage
269import target_files_diff
270from check_target_files_vintf import CheckVintfIfTrebleEnabled
271from non_ab_ota import GenerateNonAbOtaPackage
272
273if sys.hexversion < 0x02070000:
274  print("Python 2.7 or newer is required.", file=sys.stderr)
275  sys.exit(1)
276
277logger = logging.getLogger(__name__)
278
279OPTIONS = ota_utils.OPTIONS
280OPTIONS.verify = False
281OPTIONS.patch_threshold = 0.95
282OPTIONS.wipe_user_data = False
283OPTIONS.extra_script = None
284OPTIONS.worker_threads = multiprocessing.cpu_count() // 2
285if OPTIONS.worker_threads == 0:
286  OPTIONS.worker_threads = 1
287OPTIONS.two_step = False
288OPTIONS.include_secondary = False
289OPTIONS.block_based = True
290OPTIONS.updater_binary = None
291OPTIONS.oem_dicts = None
292OPTIONS.oem_source = None
293OPTIONS.oem_no_mount = False
294OPTIONS.full_radio = False
295OPTIONS.full_bootloader = False
296# Stash size cannot exceed cache_size * threshold.
297OPTIONS.cache_size = None
298OPTIONS.stash_threshold = 0.8
299OPTIONS.log_diff = None
300OPTIONS.payload_signer = None
301OPTIONS.payload_signer_args = []
302OPTIONS.payload_signer_maximum_signature_size = None
303OPTIONS.extracted_input = None
304OPTIONS.skip_postinstall = False
305OPTIONS.skip_compatibility_check = False
306OPTIONS.disable_fec_computation = False
307OPTIONS.disable_verity_computation = False
308OPTIONS.partial = None
309OPTIONS.custom_images = {}
310OPTIONS.disable_vabc = False
311OPTIONS.spl_downgrade = False
312OPTIONS.vabc_downgrade = False
313OPTIONS.enable_vabc_xor = True
314OPTIONS.force_minor_version = None
315OPTIONS.compressor_types = None
316OPTIONS.enable_zucchini = True
317OPTIONS.enable_lz4diff = False
318OPTIONS.vabc_compression_param = None
319
320POSTINSTALL_CONFIG = 'META/postinstall_config.txt'
321DYNAMIC_PARTITION_INFO = 'META/dynamic_partitions_info.txt'
322AB_PARTITIONS = 'META/ab_partitions.txt'
323
324# Files to be unzipped for target diffing purpose.
325TARGET_DIFFING_UNZIP_PATTERN = ['BOOT', 'RECOVERY', 'SYSTEM/*', 'VENDOR/*',
326                                'PRODUCT/*', 'SYSTEM_EXT/*', 'ODM/*',
327                                'VENDOR_DLKM/*', 'ODM_DLKM/*', 'SYSTEM_DLKM/*']
328RETROFIT_DAP_UNZIP_PATTERN = ['OTA/super_*.img', AB_PARTITIONS]
329
330# Images to be excluded from secondary payload. We essentially only keep
331# 'system_other' and bootloader partitions.
332SECONDARY_PAYLOAD_SKIPPED_IMAGES = [
333    'boot', 'dtbo', 'modem', 'odm', 'odm_dlkm', 'product', 'radio', 'recovery',
334    'system_dlkm', 'system_ext', 'vbmeta', 'vbmeta_system', 'vbmeta_vendor',
335    'vendor', 'vendor_boot']
336
337
338class PayloadSigner(object):
339  """A class that wraps the payload signing works.
340
341  When generating a Payload, hashes of the payload and metadata files will be
342  signed with the device key, either by calling an external payload signer or
343  by calling openssl with the package key. This class provides a unified
344  interface, so that callers can just call PayloadSigner.Sign().
345
346  If an external payload signer has been specified (OPTIONS.payload_signer), it
347  calls the signer with the provided args (OPTIONS.payload_signer_args). Note
348  that the signing key should be provided as part of the payload_signer_args.
349  Otherwise without an external signer, it uses the package key
350  (OPTIONS.package_key) and calls openssl for the signing works.
351  """
352
353  def __init__(self):
354    if OPTIONS.payload_signer is None:
355      # Prepare the payload signing key.
356      private_key = OPTIONS.package_key + OPTIONS.private_key_suffix
357      pw = OPTIONS.key_passwords[OPTIONS.package_key]
358
359      cmd = ["openssl", "pkcs8", "-in", private_key, "-inform", "DER"]
360      cmd.extend(["-passin", "pass:" + pw] if pw else ["-nocrypt"])
361      signing_key = common.MakeTempFile(prefix="key-", suffix=".key")
362      cmd.extend(["-out", signing_key])
363      common.RunAndCheckOutput(cmd, verbose=False)
364
365      self.signer = "openssl"
366      self.signer_args = ["pkeyutl", "-sign", "-inkey", signing_key,
367                          "-pkeyopt", "digest:sha256"]
368      self.maximum_signature_size = self._GetMaximumSignatureSizeInBytes(
369          signing_key)
370    else:
371      self.signer = OPTIONS.payload_signer
372      self.signer_args = OPTIONS.payload_signer_args
373      if OPTIONS.payload_signer_maximum_signature_size:
374        self.maximum_signature_size = int(
375            OPTIONS.payload_signer_maximum_signature_size)
376      else:
377        # The legacy config uses RSA2048 keys.
378        logger.warning("The maximum signature size for payload signer is not"
379                       " set, default to 256 bytes.")
380        self.maximum_signature_size = 256
381
382  @staticmethod
383  def _GetMaximumSignatureSizeInBytes(signing_key):
384    out_signature_size_file = common.MakeTempFile("signature_size")
385    cmd = ["delta_generator", "--out_maximum_signature_size_file={}".format(
386        out_signature_size_file), "--private_key={}".format(signing_key)]
387    common.RunAndCheckOutput(cmd)
388    with open(out_signature_size_file) as f:
389      signature_size = f.read().rstrip()
390    logger.info("%s outputs the maximum signature size: %s", cmd[0],
391                signature_size)
392    return int(signature_size)
393
394  def Sign(self, in_file):
395    """Signs the given input file. Returns the output filename."""
396    out_file = common.MakeTempFile(prefix="signed-", suffix=".bin")
397    cmd = [self.signer] + self.signer_args + ['-in', in_file, '-out', out_file]
398    common.RunAndCheckOutput(cmd)
399    return out_file
400
401
402class Payload(object):
403  """Manages the creation and the signing of an A/B OTA Payload."""
404
405  PAYLOAD_BIN = 'payload.bin'
406  PAYLOAD_PROPERTIES_TXT = 'payload_properties.txt'
407  SECONDARY_PAYLOAD_BIN = 'secondary/payload.bin'
408  SECONDARY_PAYLOAD_PROPERTIES_TXT = 'secondary/payload_properties.txt'
409
410  def __init__(self, secondary=False):
411    """Initializes a Payload instance.
412
413    Args:
414      secondary: Whether it's generating a secondary payload (default: False).
415    """
416    self.payload_file = None
417    self.payload_properties = None
418    self.secondary = secondary
419
420  def _Run(self, cmd):  # pylint: disable=no-self-use
421    # Don't pipe (buffer) the output if verbose is set. Let
422    # brillo_update_payload write to stdout/stderr directly, so its progress can
423    # be monitored.
424    if OPTIONS.verbose:
425      common.RunAndCheckOutput(cmd, stdout=None, stderr=None)
426    else:
427      common.RunAndCheckOutput(cmd)
428
429  def Generate(self, target_file, source_file=None, additional_args=None):
430    """Generates a payload from the given target-files zip(s).
431
432    Args:
433      target_file: The filename of the target build target-files zip.
434      source_file: The filename of the source build target-files zip; or None if
435          generating a full OTA.
436      additional_args: A list of additional args that should be passed to
437          brillo_update_payload script; or None.
438    """
439    if additional_args is None:
440      additional_args = []
441
442    payload_file = common.MakeTempFile(prefix="payload-", suffix=".bin")
443    cmd = ["brillo_update_payload", "generate",
444           "--payload", payload_file,
445           "--target_image", target_file]
446    if source_file is not None:
447      cmd.extend(["--source_image", source_file])
448      if OPTIONS.disable_fec_computation:
449        cmd.extend(["--disable_fec_computation", "true"])
450      if OPTIONS.disable_verity_computation:
451        cmd.extend(["--disable_verity_computation", "true"])
452    cmd.extend(additional_args)
453    self._Run(cmd)
454
455    self.payload_file = payload_file
456    self.payload_properties = None
457
458  def Sign(self, payload_signer):
459    """Generates and signs the hashes of the payload and metadata.
460
461    Args:
462      payload_signer: A PayloadSigner() instance that serves the signing work.
463
464    Raises:
465      AssertionError: On any failure when calling brillo_update_payload script.
466    """
467    assert isinstance(payload_signer, PayloadSigner)
468
469    # 1. Generate hashes of the payload and metadata files.
470    payload_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
471    metadata_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
472    cmd = ["brillo_update_payload", "hash",
473           "--unsigned_payload", self.payload_file,
474           "--signature_size", str(payload_signer.maximum_signature_size),
475           "--metadata_hash_file", metadata_sig_file,
476           "--payload_hash_file", payload_sig_file]
477    self._Run(cmd)
478
479    # 2. Sign the hashes.
480    signed_payload_sig_file = payload_signer.Sign(payload_sig_file)
481    signed_metadata_sig_file = payload_signer.Sign(metadata_sig_file)
482
483    # 3. Insert the signatures back into the payload file.
484    signed_payload_file = common.MakeTempFile(prefix="signed-payload-",
485                                              suffix=".bin")
486    cmd = ["brillo_update_payload", "sign",
487           "--unsigned_payload", self.payload_file,
488           "--payload", signed_payload_file,
489           "--signature_size", str(payload_signer.maximum_signature_size),
490           "--metadata_signature_file", signed_metadata_sig_file,
491           "--payload_signature_file", signed_payload_sig_file]
492    self._Run(cmd)
493
494    # 4. Dump the signed payload properties.
495    properties_file = common.MakeTempFile(prefix="payload-properties-",
496                                          suffix=".txt")
497    cmd = ["brillo_update_payload", "properties",
498           "--payload", signed_payload_file,
499           "--properties_file", properties_file]
500    self._Run(cmd)
501
502    if self.secondary:
503      with open(properties_file, "a") as f:
504        f.write("SWITCH_SLOT_ON_REBOOT=0\n")
505
506    if OPTIONS.wipe_user_data:
507      with open(properties_file, "a") as f:
508        f.write("POWERWASH=1\n")
509
510    self.payload_file = signed_payload_file
511    self.payload_properties = properties_file
512
513  def WriteToZip(self, output_zip):
514    """Writes the payload to the given zip.
515
516    Args:
517      output_zip: The output ZipFile instance.
518    """
519    assert self.payload_file is not None
520    assert self.payload_properties is not None
521
522    if self.secondary:
523      payload_arcname = Payload.SECONDARY_PAYLOAD_BIN
524      payload_properties_arcname = Payload.SECONDARY_PAYLOAD_PROPERTIES_TXT
525    else:
526      payload_arcname = Payload.PAYLOAD_BIN
527      payload_properties_arcname = Payload.PAYLOAD_PROPERTIES_TXT
528
529    # Add the signed payload file and properties into the zip. In order to
530    # support streaming, we pack them as ZIP_STORED. So these entries can be
531    # read directly with the offset and length pairs.
532    common.ZipWrite(output_zip, self.payload_file, arcname=payload_arcname,
533                    compress_type=zipfile.ZIP_STORED)
534    common.ZipWrite(output_zip, self.payload_properties,
535                    arcname=payload_properties_arcname,
536                    compress_type=zipfile.ZIP_STORED)
537
538
539def _LoadOemDicts(oem_source):
540  """Returns the list of loaded OEM properties dict."""
541  if not oem_source:
542    return None
543
544  oem_dicts = []
545  for oem_file in oem_source:
546    oem_dicts.append(common.LoadDictionaryFromFile(oem_file))
547  return oem_dicts
548
549
550class StreamingPropertyFiles(PropertyFiles):
551  """A subclass for computing the property-files for streaming A/B OTAs."""
552
553  def __init__(self):
554    super(StreamingPropertyFiles, self).__init__()
555    self.name = 'ota-streaming-property-files'
556    self.required = (
557        # payload.bin and payload_properties.txt must exist.
558        'payload.bin',
559        'payload_properties.txt',
560    )
561    self.optional = (
562        # apex_info.pb isn't directly used in the update flow
563        'apex_info.pb',
564        # care_map is available only if dm-verity is enabled.
565        'care_map.pb',
566        'care_map.txt',
567        # compatibility.zip is available only if target supports Treble.
568        'compatibility.zip',
569    )
570
571
572class AbOtaPropertyFiles(StreamingPropertyFiles):
573  """The property-files for A/B OTA that includes payload_metadata.bin info.
574
575  Since P, we expose one more token (aka property-file), in addition to the ones
576  for streaming A/B OTA, for a virtual entry of 'payload_metadata.bin'.
577  'payload_metadata.bin' is the header part of a payload ('payload.bin'), which
578  doesn't exist as a separate ZIP entry, but can be used to verify if the
579  payload can be applied on the given device.
580
581  For backward compatibility, we keep both of the 'ota-streaming-property-files'
582  and the newly added 'ota-property-files' in P. The new token will only be
583  available in 'ota-property-files'.
584  """
585
586  def __init__(self):
587    super(AbOtaPropertyFiles, self).__init__()
588    self.name = 'ota-property-files'
589
590  def _GetPrecomputed(self, input_zip):
591    offset, size = self._GetPayloadMetadataOffsetAndSize(input_zip)
592    return ['payload_metadata.bin:{}:{}'.format(offset, size)]
593
594  @staticmethod
595  def _GetPayloadMetadataOffsetAndSize(input_zip):
596    """Computes the offset and size of the payload metadata for a given package.
597
598    (From system/update_engine/update_metadata.proto)
599    A delta update file contains all the deltas needed to update a system from
600    one specific version to another specific version. The update format is
601    represented by this struct pseudocode:
602
603    struct delta_update_file {
604      char magic[4] = "CrAU";
605      uint64 file_format_version;
606      uint64 manifest_size;  // Size of protobuf DeltaArchiveManifest
607
608      // Only present if format_version > 1:
609      uint32 metadata_signature_size;
610
611      // The Bzip2 compressed DeltaArchiveManifest
612      char manifest[metadata_signature_size];
613
614      // The signature of the metadata (from the beginning of the payload up to
615      // this location, not including the signature itself). This is a
616      // serialized Signatures message.
617      char medatada_signature_message[metadata_signature_size];
618
619      // Data blobs for files, no specific format. The specific offset
620      // and length of each data blob is recorded in the DeltaArchiveManifest.
621      struct {
622        char data[];
623      } blobs[];
624
625      // These two are not signed:
626      uint64 payload_signatures_message_size;
627      char payload_signatures_message[];
628    };
629
630    'payload-metadata.bin' contains all the bytes from the beginning of the
631    payload, till the end of 'medatada_signature_message'.
632    """
633    payload_info = input_zip.getinfo('payload.bin')
634    (payload_offset, payload_size) = GetZipEntryOffset(input_zip, payload_info)
635
636    # Read the underlying raw zipfile at specified offset
637    payload_fp = input_zip.fp
638    payload_fp.seek(payload_offset)
639    header_bin = payload_fp.read(24)
640
641    # network byte order (big-endian)
642    header = struct.unpack("!IQQL", header_bin)
643
644    # 'CrAU'
645    magic = header[0]
646    assert magic == 0x43724155, "Invalid magic: {:x}, computed offset {}" \
647        .format(magic, payload_offset)
648
649    manifest_size = header[2]
650    metadata_signature_size = header[3]
651    metadata_total = 24 + manifest_size + metadata_signature_size
652    assert metadata_total < payload_size
653
654    return (payload_offset, metadata_total)
655
656
657def ModifyVABCCompressionParam(content, algo):
658  """ Update update VABC Compression Param in dynamic_partitions_info.txt
659  Args:
660    content: The string content of dynamic_partitions_info.txt
661    algo: The compression algorithm should be used for VABC. See
662          https://cs.android.com/android/platform/superproject/+/master:system/core/fs_mgr/libsnapshot/cow_writer.cpp;l=127;bpv=1;bpt=1?q=CowWriter::ParseOptions&sq=
663  Returns:
664    Updated content of dynamic_partitions_info.txt , with custom compression algo
665  """
666  output_list = []
667  for line in content.splitlines():
668    if line.startswith("virtual_ab_compression_method="):
669      continue
670    output_list.append(line)
671  output_list.append("virtual_ab_compression_method="+algo)
672  return "\n".join(output_list)
673
674
675def UpdatesInfoForSpecialUpdates(content, partitions_filter,
676                                 delete_keys=None):
677  """ Updates info file for secondary payload generation, partial update, etc.
678
679    Scan each line in the info file, and remove the unwanted partitions from
680    the dynamic partition list in the related properties. e.g.
681    "super_google_dynamic_partitions_partition_list=system vendor product"
682    will become "super_google_dynamic_partitions_partition_list=system".
683
684  Args:
685    content: The content of the input info file. e.g. misc_info.txt.
686    partitions_filter: A function to filter the desired partitions from a given
687      list
688    delete_keys: A list of keys to delete in the info file
689
690  Returns:
691    A string of the updated info content.
692  """
693
694  output_list = []
695  # The suffix in partition_list variables that follows the name of the
696  # partition group.
697  list_suffix = 'partition_list'
698  for line in content.splitlines():
699    if line.startswith('#') or '=' not in line:
700      output_list.append(line)
701      continue
702    key, value = line.strip().split('=', 1)
703
704    if delete_keys and key in delete_keys:
705      pass
706    elif key.endswith(list_suffix):
707      partitions = value.split()
708      # TODO for partial update, partitions in the same group must be all
709      # updated or all omitted
710      partitions = filter(partitions_filter, partitions)
711      output_list.append('{}={}'.format(key, ' '.join(partitions)))
712    else:
713      output_list.append(line)
714  return '\n'.join(output_list)
715
716
717def GetTargetFilesZipForSecondaryImages(input_file, skip_postinstall=False):
718  """Returns a target-files.zip file for generating secondary payload.
719
720  Although the original target-files.zip already contains secondary slot
721  images (i.e. IMAGES/system_other.img), we need to rename the files to the
722  ones without _other suffix. Note that we cannot instead modify the names in
723  META/ab_partitions.txt, because there are no matching partitions on device.
724
725  For the partitions that don't have secondary images, the ones for primary
726  slot will be used. This is to ensure that we always have valid boot, vbmeta,
727  bootloader images in the inactive slot.
728
729  Args:
730    input_file: The input target-files.zip file.
731    skip_postinstall: Whether to skip copying the postinstall config file.
732
733  Returns:
734    The filename of the target-files.zip for generating secondary payload.
735  """
736
737  def GetInfoForSecondaryImages(info_file):
738    """Updates info file for secondary payload generation."""
739    with open(info_file) as f:
740      content = f.read()
741    # Remove virtual_ab flag from secondary payload so that OTA client
742    # don't use snapshots for secondary update
743    delete_keys = ['virtual_ab', "virtual_ab_retrofit"]
744    return UpdatesInfoForSpecialUpdates(
745        content, lambda p: p not in SECONDARY_PAYLOAD_SKIPPED_IMAGES,
746        delete_keys)
747
748  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
749  target_zip = zipfile.ZipFile(target_file, 'w', allowZip64=True)
750
751  with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip:
752    infolist = input_zip.infolist()
753
754  input_tmp = common.UnzipTemp(input_file, UNZIP_PATTERN)
755  for info in infolist:
756    unzipped_file = os.path.join(input_tmp, *info.filename.split('/'))
757    if info.filename == 'IMAGES/system_other.img':
758      common.ZipWrite(target_zip, unzipped_file, arcname='IMAGES/system.img')
759
760    # Primary images and friends need to be skipped explicitly.
761    elif info.filename in ('IMAGES/system.img',
762                           'IMAGES/system.map'):
763      pass
764
765    # Copy images that are not in SECONDARY_PAYLOAD_SKIPPED_IMAGES.
766    elif info.filename.startswith(('IMAGES/', 'RADIO/')):
767      image_name = os.path.basename(info.filename)
768      if image_name not in ['{}.img'.format(partition) for partition in
769                            SECONDARY_PAYLOAD_SKIPPED_IMAGES]:
770        common.ZipWrite(target_zip, unzipped_file, arcname=info.filename)
771
772    # Skip copying the postinstall config if requested.
773    elif skip_postinstall and info.filename == POSTINSTALL_CONFIG:
774      pass
775
776    elif info.filename.startswith('META/'):
777      # Remove the unnecessary partitions for secondary images from the
778      # ab_partitions file.
779      if info.filename == AB_PARTITIONS:
780        with open(unzipped_file) as f:
781          partition_list = f.read().splitlines()
782        partition_list = [partition for partition in partition_list if partition
783                          and partition not in SECONDARY_PAYLOAD_SKIPPED_IMAGES]
784        common.ZipWriteStr(target_zip, info.filename,
785                           '\n'.join(partition_list))
786      # Remove the unnecessary partitions from the dynamic partitions list.
787      elif (info.filename == 'META/misc_info.txt' or
788            info.filename == DYNAMIC_PARTITION_INFO):
789        modified_info = GetInfoForSecondaryImages(unzipped_file)
790        common.ZipWriteStr(target_zip, info.filename, modified_info)
791      else:
792        common.ZipWrite(target_zip, unzipped_file, arcname=info.filename)
793
794  common.ZipClose(target_zip)
795
796  return target_file
797
798
799def GetTargetFilesZipWithoutPostinstallConfig(input_file):
800  """Returns a target-files.zip that's not containing postinstall_config.txt.
801
802  This allows brillo_update_payload script to skip writing all the postinstall
803  hooks in the generated payload. The input target-files.zip file will be
804  duplicated, with 'META/postinstall_config.txt' skipped. If input_file doesn't
805  contain the postinstall_config.txt entry, the input file will be returned.
806
807  Args:
808    input_file: The input target-files.zip filename.
809
810  Returns:
811    The filename of target-files.zip that doesn't contain postinstall config.
812  """
813  # We should only make a copy if postinstall_config entry exists.
814  with zipfile.ZipFile(input_file, 'r', allowZip64=True) as input_zip:
815    if POSTINSTALL_CONFIG not in input_zip.namelist():
816      return input_file
817
818  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
819  shutil.copyfile(input_file, target_file)
820  common.ZipDelete(target_file, POSTINSTALL_CONFIG)
821  return target_file
822
823
824def ParseInfoDict(target_file_path):
825  with zipfile.ZipFile(target_file_path, 'r', allowZip64=True) as zfp:
826    return common.LoadInfoDict(zfp)
827
828
829def GetTargetFilesZipForCustomVABCCompression(input_file, vabc_compression_param):
830  """Returns a target-files.zip with a custom VABC compression param.
831  Args:
832    input_file: The input target-files.zip path
833    vabc_compression_param: Custom Virtual AB Compression algorithm
834
835  Returns:
836    The path to modified target-files.zip
837  """
838  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
839  shutil.copyfile(input_file, target_file)
840  common.ZipDelete(target_file, DYNAMIC_PARTITION_INFO)
841  with zipfile.ZipFile(input_file, 'r', allowZip64=True) as zfp:
842    dynamic_partition_info = zfp.read(DYNAMIC_PARTITION_INFO).decode()
843    dynamic_partition_info = ModifyVABCCompressionParam(
844        dynamic_partition_info, vabc_compression_param)
845    with zipfile.ZipFile(target_file, "a", allowZip64=True) as output_zip:
846      output_zip.writestr(DYNAMIC_PARTITION_INFO, dynamic_partition_info)
847  return target_file
848
849
850def GetTargetFilesZipForPartialUpdates(input_file, ab_partitions):
851  """Returns a target-files.zip for partial ota update package generation.
852
853  This function modifies ab_partitions list with the desired partitions before
854  calling the brillo_update_payload script. It also cleans up the reference to
855  the excluded partitions in the info file, e.g misc_info.txt.
856
857  Args:
858    input_file: The input target-files.zip filename.
859    ab_partitions: A list of partitions to include in the partial update
860
861  Returns:
862    The filename of target-files.zip used for partial ota update.
863  """
864
865  def AddImageForPartition(partition_name):
866    """Add the archive name for a given partition to the copy list."""
867    for prefix in ['IMAGES', 'RADIO']:
868      image_path = '{}/{}.img'.format(prefix, partition_name)
869      if image_path in namelist:
870        copy_entries.append(image_path)
871        map_path = '{}/{}.map'.format(prefix, partition_name)
872        if map_path in namelist:
873          copy_entries.append(map_path)
874        return
875
876    raise ValueError("Cannot find {} in input zipfile".format(partition_name))
877
878  with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
879    original_ab_partitions = input_zip.read(
880        AB_PARTITIONS).decode().splitlines()
881    namelist = input_zip.namelist()
882
883  unrecognized_partitions = [partition for partition in ab_partitions if
884                             partition not in original_ab_partitions]
885  if unrecognized_partitions:
886    raise ValueError("Unrecognized partitions when generating partial updates",
887                     unrecognized_partitions)
888
889  logger.info("Generating partial updates for %s", ab_partitions)
890
891  copy_entries = ['META/update_engine_config.txt']
892  for partition_name in ab_partitions:
893    AddImageForPartition(partition_name)
894
895  # Use zip2zip to avoid extracting the zipfile.
896  partial_target_file = common.MakeTempFile(suffix='.zip')
897  cmd = ['zip2zip', '-i', input_file, '-o', partial_target_file]
898  cmd.extend(['{}:{}'.format(name, name) for name in copy_entries])
899  common.RunAndCheckOutput(cmd)
900
901  partial_target_zip = zipfile.ZipFile(partial_target_file, 'a',
902                                       allowZip64=True)
903  with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
904    common.ZipWriteStr(partial_target_zip, 'META/ab_partitions.txt',
905                       '\n'.join(ab_partitions))
906    CARE_MAP_ENTRY = "META/care_map.pb"
907    if CARE_MAP_ENTRY in input_zip.namelist():
908      caremap = care_map_pb2.CareMap()
909      caremap.ParseFromString(input_zip.read(CARE_MAP_ENTRY))
910      filtered = [
911          part for part in caremap.partitions if part.name in ab_partitions]
912      del caremap.partitions[:]
913      caremap.partitions.extend(filtered)
914      common.ZipWriteStr(partial_target_zip, CARE_MAP_ENTRY,
915                         caremap.SerializeToString())
916
917    for info_file in ['META/misc_info.txt', DYNAMIC_PARTITION_INFO]:
918      if info_file not in input_zip.namelist():
919        logger.warning('Cannot find %s in input zipfile', info_file)
920        continue
921      content = input_zip.read(info_file).decode()
922      modified_info = UpdatesInfoForSpecialUpdates(
923          content, lambda p: p in ab_partitions)
924      if OPTIONS.vabc_compression_param and info_file == DYNAMIC_PARTITION_INFO:
925        modified_info = ModifyVABCCompressionParam(
926            modified_info, OPTIONS.vabc_compression_param)
927      common.ZipWriteStr(partial_target_zip, info_file, modified_info)
928
929    # TODO(xunchang) handle META/postinstall_config.txt'
930
931  common.ZipClose(partial_target_zip)
932
933  return partial_target_file
934
935
936def GetTargetFilesZipForRetrofitDynamicPartitions(input_file,
937                                                  super_block_devices,
938                                                  dynamic_partition_list):
939  """Returns a target-files.zip for retrofitting dynamic partitions.
940
941  This allows brillo_update_payload to generate an OTA based on the exact
942  bits on the block devices. Postinstall is disabled.
943
944  Args:
945    input_file: The input target-files.zip filename.
946    super_block_devices: The list of super block devices
947    dynamic_partition_list: The list of dynamic partitions
948
949  Returns:
950    The filename of target-files.zip with *.img replaced with super_*.img for
951    each block device in super_block_devices.
952  """
953  assert super_block_devices, "No super_block_devices are specified."
954
955  replace = {'OTA/super_{}.img'.format(dev): 'IMAGES/{}.img'.format(dev)
956             for dev in super_block_devices}
957
958  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
959  shutil.copyfile(input_file, target_file)
960
961  with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
962    namelist = input_zip.namelist()
963
964  input_tmp = common.UnzipTemp(input_file, RETROFIT_DAP_UNZIP_PATTERN)
965
966  # Remove partitions from META/ab_partitions.txt that is in
967  # dynamic_partition_list but not in super_block_devices so that
968  # brillo_update_payload won't generate update for those logical partitions.
969  ab_partitions_file = os.path.join(input_tmp, *AB_PARTITIONS.split('/'))
970  with open(ab_partitions_file) as f:
971    ab_partitions_lines = f.readlines()
972    ab_partitions = [line.strip() for line in ab_partitions_lines]
973  # Assert that all super_block_devices are in ab_partitions
974  super_device_not_updated = [partition for partition in super_block_devices
975                              if partition not in ab_partitions]
976  assert not super_device_not_updated, \
977      "{} is in super_block_devices but not in {}".format(
978          super_device_not_updated, AB_PARTITIONS)
979  # ab_partitions -= (dynamic_partition_list - super_block_devices)
980  new_ab_partitions = common.MakeTempFile(
981      prefix="ab_partitions", suffix=".txt")
982  with open(new_ab_partitions, 'w') as f:
983    for partition in ab_partitions:
984      if (partition in dynamic_partition_list and
985              partition not in super_block_devices):
986        logger.info("Dropping %s from ab_partitions.txt", partition)
987        continue
988      f.write(partition + "\n")
989  to_delete = [AB_PARTITIONS]
990
991  # Always skip postinstall for a retrofit update.
992  to_delete += [POSTINSTALL_CONFIG]
993
994  # Delete dynamic_partitions_info.txt so that brillo_update_payload thinks this
995  # is a regular update on devices without dynamic partitions support.
996  to_delete += [DYNAMIC_PARTITION_INFO]
997
998  # Remove the existing partition images as well as the map files.
999  to_delete += list(replace.values())
1000  to_delete += ['IMAGES/{}.map'.format(dev) for dev in super_block_devices]
1001
1002  common.ZipDelete(target_file, to_delete)
1003
1004  target_zip = zipfile.ZipFile(target_file, 'a', allowZip64=True)
1005
1006  # Write super_{foo}.img as {foo}.img.
1007  for src, dst in replace.items():
1008    assert src in namelist, \
1009        'Missing {} in {}; {} cannot be written'.format(src, input_file, dst)
1010    unzipped_file = os.path.join(input_tmp, *src.split('/'))
1011    common.ZipWrite(target_zip, unzipped_file, arcname=dst)
1012
1013  # Write new ab_partitions.txt file
1014  common.ZipWrite(target_zip, new_ab_partitions, arcname=AB_PARTITIONS)
1015
1016  common.ZipClose(target_zip)
1017
1018  return target_file
1019
1020
1021def GetTargetFilesZipForCustomImagesUpdates(input_file, custom_images):
1022  """Returns a target-files.zip for custom partitions update.
1023
1024  This function modifies ab_partitions list with the desired custom partitions
1025  and puts the custom images into the target target-files.zip.
1026
1027  Args:
1028    input_file: The input target-files.zip filename.
1029    custom_images: A map of custom partitions and custom images.
1030
1031  Returns:
1032    The filename of a target-files.zip which has renamed the custom images in
1033    the IMAGS/ to their partition names.
1034  """
1035  # Use zip2zip to avoid extracting the zipfile.
1036  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
1037  cmd = ['zip2zip', '-i', input_file, '-o', target_file]
1038
1039  with zipfile.ZipFile(input_file, allowZip64=True) as input_zip:
1040    namelist = input_zip.namelist()
1041
1042  # Write {custom_image}.img as {custom_partition}.img.
1043  for custom_partition, custom_image in custom_images.items():
1044    default_custom_image = '{}.img'.format(custom_partition)
1045    if default_custom_image != custom_image:
1046      logger.info("Update custom partition '%s' with '%s'",
1047                  custom_partition, custom_image)
1048      # Default custom image need to be deleted first.
1049      namelist.remove('IMAGES/{}'.format(default_custom_image))
1050      # IMAGES/{custom_image}.img:IMAGES/{custom_partition}.img.
1051      cmd.extend(['IMAGES/{}:IMAGES/{}'.format(custom_image,
1052                                               default_custom_image)])
1053
1054  cmd.extend(['{}:{}'.format(name, name) for name in namelist])
1055  common.RunAndCheckOutput(cmd)
1056
1057  return target_file
1058
1059
1060def GeneratePartitionTimestampFlags(partition_state):
1061  partition_timestamps = [
1062      part.partition_name + ":" + part.version
1063      for part in partition_state]
1064  return ["--partition_timestamps", ",".join(partition_timestamps)]
1065
1066
1067def GeneratePartitionTimestampFlagsDowngrade(
1068        pre_partition_state, post_partition_state):
1069  assert pre_partition_state is not None
1070  partition_timestamps = {}
1071  for part in post_partition_state:
1072    partition_timestamps[part.partition_name] = part.version
1073  for part in pre_partition_state:
1074    if part.partition_name in partition_timestamps:
1075      partition_timestamps[part.partition_name] = \
1076        max(part.version, partition_timestamps[part.partition_name])
1077  return [
1078      "--partition_timestamps",
1079      ",".join([key + ":" + val for (key, val)
1080                in partition_timestamps.items()])
1081  ]
1082
1083
1084def SupportsMainlineGkiUpdates(target_file):
1085  """Return True if the build supports MainlineGKIUpdates.
1086
1087  This function scans the product.img file in IMAGES/ directory for
1088  pattern |*/apex/com.android.gki.*.apex|. If there are files
1089  matching this pattern, conclude that build supports mainline
1090  GKI and return True
1091
1092  Args:
1093    target_file: Path to a target_file.zip, or an extracted directory
1094  Return:
1095    True if thisb uild supports Mainline GKI Updates.
1096  """
1097  if target_file is None:
1098    return False
1099  if os.path.isfile(target_file):
1100    target_file = common.UnzipTemp(target_file, ["IMAGES/product.img"])
1101  if not os.path.isdir(target_file):
1102    assert os.path.isdir(target_file), \
1103        "{} must be a path to zip archive or dir containing extracted"\
1104        " target_files".format(target_file)
1105  image_file = os.path.join(target_file, "IMAGES", "product.img")
1106
1107  if not os.path.isfile(image_file):
1108    return False
1109
1110  if IsSparseImage(image_file):
1111    # Unsparse the image
1112    tmp_img = common.MakeTempFile(suffix=".img")
1113    subprocess.check_output(["simg2img", image_file, tmp_img])
1114    image_file = tmp_img
1115
1116  cmd = ["debugfs_static", "-R", "ls -p /apex", image_file]
1117  output = subprocess.check_output(cmd).decode()
1118
1119  pattern = re.compile(r"com\.android\.gki\..*\.apex")
1120  return pattern.search(output) is not None
1121
1122
1123def GenerateAbOtaPackage(target_file, output_file, source_file=None):
1124  """Generates an Android OTA package that has A/B update payload."""
1125  # Stage the output zip package for package signing.
1126  if not OPTIONS.no_signing:
1127    staging_file = common.MakeTempFile(suffix='.zip')
1128  else:
1129    staging_file = output_file
1130  output_zip = zipfile.ZipFile(staging_file, "w",
1131                               compression=zipfile.ZIP_DEFLATED,
1132                               allowZip64=True)
1133
1134  if source_file is not None:
1135    assert "ab_partitions" in OPTIONS.source_info_dict, \
1136        "META/ab_partitions.txt is required for ab_update."
1137    assert "ab_partitions" in OPTIONS.target_info_dict, \
1138        "META/ab_partitions.txt is required for ab_update."
1139    target_info = common.BuildInfo(OPTIONS.target_info_dict, OPTIONS.oem_dicts)
1140    source_info = common.BuildInfo(OPTIONS.source_info_dict, OPTIONS.oem_dicts)
1141    # If source supports VABC, delta_generator/update_engine will attempt to
1142    # use VABC. This dangerous, as the target build won't have snapuserd to
1143    # serve I/O request when device boots. Therefore, disable VABC if source
1144    # build doesn't supports it.
1145    if not source_info.is_vabc or not target_info.is_vabc:
1146      logger.info("Either source or target does not support VABC, disabling.")
1147      OPTIONS.disable_vabc = True
1148
1149  else:
1150    assert "ab_partitions" in OPTIONS.info_dict, \
1151        "META/ab_partitions.txt is required for ab_update."
1152    target_info = common.BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
1153    source_info = None
1154
1155  if target_info.vendor_suppressed_vabc:
1156    logger.info("Vendor suppressed VABC. Disabling")
1157    OPTIONS.disable_vabc = True
1158
1159  # Both source and target build need to support VABC XOR for us to use it.
1160  # Source build's update_engine must be able to write XOR ops, and target
1161  # build's snapuserd must be able to interpret XOR ops.
1162  if not target_info.is_vabc_xor or OPTIONS.disable_vabc or \
1163          (source_info is not None and not source_info.is_vabc_xor):
1164    logger.info("VABC XOR Not supported, disabling")
1165    OPTIONS.enable_vabc_xor = False
1166  additional_args = []
1167
1168  # Prepare custom images.
1169  if OPTIONS.custom_images:
1170    target_file = GetTargetFilesZipForCustomImagesUpdates(
1171        target_file, OPTIONS.custom_images)
1172
1173  if OPTIONS.retrofit_dynamic_partitions:
1174    target_file = GetTargetFilesZipForRetrofitDynamicPartitions(
1175        target_file, target_info.get("super_block_devices").strip().split(),
1176        target_info.get("dynamic_partition_list").strip().split())
1177  elif OPTIONS.partial:
1178    target_file = GetTargetFilesZipForPartialUpdates(target_file,
1179                                                     OPTIONS.partial)
1180    additional_args += ["--is_partial_update", "true"]
1181  elif OPTIONS.vabc_compression_param:
1182    target_file = GetTargetFilesZipForCustomVABCCompression(
1183        target_file, OPTIONS.vabc_compression_param)
1184  elif OPTIONS.skip_postinstall:
1185    target_file = GetTargetFilesZipWithoutPostinstallConfig(target_file)
1186  # Target_file may have been modified, reparse ab_partitions
1187  with zipfile.ZipFile(target_file, allowZip64=True) as zfp:
1188    target_info.info_dict['ab_partitions'] = zfp.read(
1189        AB_PARTITIONS).decode().strip().split("\n")
1190
1191  CheckVintfIfTrebleEnabled(target_file, target_info)
1192
1193  # Metadata to comply with Android OTA package format.
1194  metadata = GetPackageMetadata(target_info, source_info)
1195  # Generate payload.
1196  payload = Payload()
1197
1198  partition_timestamps_flags = []
1199  # Enforce a max timestamp this payload can be applied on top of.
1200  if OPTIONS.downgrade:
1201    max_timestamp = source_info.GetBuildProp("ro.build.date.utc")
1202    partition_timestamps_flags = GeneratePartitionTimestampFlagsDowngrade(
1203        metadata.precondition.partition_state,
1204        metadata.postcondition.partition_state
1205    )
1206  else:
1207    max_timestamp = str(metadata.postcondition.timestamp)
1208    partition_timestamps_flags = GeneratePartitionTimestampFlags(
1209        metadata.postcondition.partition_state)
1210
1211  if not ota_utils.IsZucchiniCompatible(source_file, target_file):
1212    OPTIONS.enable_zucchini = False
1213
1214  additional_args += ["--enable_zucchini",
1215                      str(OPTIONS.enable_zucchini).lower()]
1216
1217  if not ota_utils.IsLz4diffCompatible(source_file, target_file):
1218    logger.warning(
1219        "Source build doesn't support lz4diff, or source/target don't have compatible lz4diff versions. Disabling lz4diff.")
1220    OPTIONS.enable_lz4diff = False
1221
1222  additional_args += ["--enable_lz4diff",
1223                      str(OPTIONS.enable_lz4diff).lower()]
1224
1225  if source_file and OPTIONS.enable_lz4diff:
1226    input_tmp = common.UnzipTemp(source_file, ["META/liblz4.so"])
1227    liblz4_path = os.path.join(input_tmp, "META", "liblz4.so")
1228    assert os.path.exists(
1229        liblz4_path), "liblz4.so not found in META/ dir of target file {}".format(liblz4_path)
1230    logger.info("Enabling lz4diff %s", liblz4_path)
1231    additional_args += ["--liblz4_path", liblz4_path]
1232    erofs_compression_param = OPTIONS.target_info_dict.get(
1233        "erofs_default_compressor")
1234    assert erofs_compression_param is not None, "'erofs_default_compressor' not found in META/misc_info.txt of target build. This is required to enable lz4diff."
1235    additional_args += ["--erofs_compression_param", erofs_compression_param]
1236
1237  if OPTIONS.disable_vabc:
1238    additional_args += ["--disable_vabc", "true"]
1239  if OPTIONS.enable_vabc_xor:
1240    additional_args += ["--enable_vabc_xor", "true"]
1241  if OPTIONS.force_minor_version:
1242    additional_args += ["--force_minor_version", OPTIONS.force_minor_version]
1243  if OPTIONS.compressor_types:
1244    additional_args += ["--compressor_types", OPTIONS.compressor_types]
1245  additional_args += ["--max_timestamp", max_timestamp]
1246
1247  if SupportsMainlineGkiUpdates(source_file):
1248    logger.warning(
1249        "Detected build with mainline GKI, include full boot image.")
1250    additional_args.extend(["--full_boot", "true"])
1251
1252  payload.Generate(
1253      target_file,
1254      source_file,
1255      additional_args + partition_timestamps_flags
1256  )
1257
1258  # Sign the payload.
1259  payload_signer = PayloadSigner()
1260  payload.Sign(payload_signer)
1261
1262  # Write the payload into output zip.
1263  payload.WriteToZip(output_zip)
1264
1265  # Generate and include the secondary payload that installs secondary images
1266  # (e.g. system_other.img).
1267  if OPTIONS.include_secondary:
1268    # We always include a full payload for the secondary slot, even when
1269    # building an incremental OTA. See the comments for "--include_secondary".
1270    secondary_target_file = GetTargetFilesZipForSecondaryImages(
1271        target_file, OPTIONS.skip_postinstall)
1272    secondary_payload = Payload(secondary=True)
1273    secondary_payload.Generate(secondary_target_file,
1274                               additional_args=["--max_timestamp",
1275                                                max_timestamp])
1276    secondary_payload.Sign(payload_signer)
1277    secondary_payload.WriteToZip(output_zip)
1278
1279  # If dm-verity is supported for the device, copy contents of care_map
1280  # into A/B OTA package.
1281  target_zip = zipfile.ZipFile(target_file, "r", allowZip64=True)
1282  if (target_info.get("verity") == "true" or
1283          target_info.get("avb_enable") == "true"):
1284    care_map_list = [x for x in ["care_map.pb", "care_map.txt"] if
1285                     "META/" + x in target_zip.namelist()]
1286
1287    # Adds care_map if either the protobuf format or the plain text one exists.
1288    if care_map_list:
1289      care_map_name = care_map_list[0]
1290      care_map_data = target_zip.read("META/" + care_map_name)
1291      # In order to support streaming, care_map needs to be packed as
1292      # ZIP_STORED.
1293      common.ZipWriteStr(output_zip, care_map_name, care_map_data,
1294                         compress_type=zipfile.ZIP_STORED)
1295    else:
1296      logger.warning("Cannot find a care_map file in the target_files package")
1297
1298  # Add the source APEX version info for incremental OTA updates, and write the
1299  # resulting APEX info to the OTA package.
1300  ota_apex_info = ota_utils.ConstructOtaApexInfo(target_zip, source_file)
1301  if ota_apex_info is not None:
1302    common.ZipWriteStr(output_zip, "apex_info.pb", ota_apex_info,
1303                       compress_type=zipfile.ZIP_STORED)
1304
1305  common.ZipClose(target_zip)
1306
1307  # We haven't written the metadata entry yet, which will be handled in
1308  # FinalizeMetadata().
1309  common.ZipClose(output_zip)
1310
1311  # AbOtaPropertyFiles is intended to replace StreamingPropertyFiles, as it
1312  # covers all the info of the latter. However, system updaters and OTA servers
1313  # need time to switch to the new flag. We keep both flags for the P timeframe
1314  # and will remove StreamingPropertyFiles in a later release.
1315  needed_property_files = (
1316      AbOtaPropertyFiles(),
1317      StreamingPropertyFiles(),
1318  )
1319  FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
1320
1321
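# Illustrative sketch (not used by this script): how an OTA client could parse
# a property-files string from the package metadata (e.g. the value of
# "ota-streaming-property-files") into (name, offset, size) tuples for ranged
# reads. It assumes the "name:offset:size[,name:offset:size...]" token format
# produced by the property-files helpers referenced above.
def _example_parse_property_files(property_files_string):
  """Parses 'name:offset:size,...' into a list of (name, offset, size)."""
  entries = []
  for token in property_files_string.strip().split(','):
    if not token:
      continue
    name, offset, size = token.split(':')
    entries.append((name, int(offset), int(size)))
  return entries

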
1322def main(argv):
1323
1324  def option_handler(o, a):
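    # Handles the options specific to this script. Returns True for options
    # recognized here; returning False lets common.ParseOptions treat the
    # option as unknown.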
1325    if o in ("-k", "--package_key"):
1326      OPTIONS.package_key = a
1327    elif o in ("-i", "--incremental_from"):
1328      OPTIONS.incremental_source = a
1329    elif o == "--full_radio":
1330      OPTIONS.full_radio = True
1331    elif o == "--full_bootloader":
1332      OPTIONS.full_bootloader = True
1333    elif o == "--wipe_user_data":
1334      OPTIONS.wipe_user_data = True
1335    elif o == "--downgrade":
1336      OPTIONS.downgrade = True
1337      OPTIONS.wipe_user_data = True
1338    elif o == "--override_timestamp":
1339      OPTIONS.downgrade = True
1340    elif o in ("-o", "--oem_settings"):
1341      OPTIONS.oem_source = a.split(',')
1342    elif o == "--oem_no_mount":
1343      OPTIONS.oem_no_mount = True
1344    elif o in ("-e", "--extra_script"):
1345      OPTIONS.extra_script = a
1346    elif o in ("-t", "--worker_threads"):
1347      if a.isdigit():
1348        OPTIONS.worker_threads = int(a)
1349      else:
1350        raise ValueError("Cannot parse value %r for option %r - only "
1351                         "integers are allowed." % (a, o))
1352    elif o in ("-2", "--two_step"):
1353      OPTIONS.two_step = True
1354    elif o == "--include_secondary":
1355      OPTIONS.include_secondary = True
1356    elif o == "--no_signing":
1357      OPTIONS.no_signing = True
1358    elif o == "--verify":
1359      OPTIONS.verify = True
1360    elif o == "--block":
1361      OPTIONS.block_based = True
1362    elif o in ("-b", "--binary"):
1363      OPTIONS.updater_binary = a
1364    elif o == "--stash_threshold":
1365      try:
1366        OPTIONS.stash_threshold = float(a)
1367      except ValueError:
1368        raise ValueError("Cannot parse value %r for option %r - expecting "
1369                         "a float" % (a, o))
1370    elif o == "--log_diff":
1371      OPTIONS.log_diff = a
1372    elif o == "--payload_signer":
1373      OPTIONS.payload_signer = a
1374    elif o == "--payload_signer_args":
1375      OPTIONS.payload_signer_args = shlex.split(a)
1376    elif o == "--payload_signer_maximum_signature_size":
1377      OPTIONS.payload_signer_maximum_signature_size = a
1378    elif o == "--payload_signer_key_size":
1379      # TODO(Xunchang) remove this option after cleaning up the callers.
1380      logger.warning("The option '--payload_signer_key_size' is deprecated."
1381                     " Use '--payload_signer_maximum_signature_size' instead.")
1382      OPTIONS.payload_signer_maximum_signature_size = a
1383    elif o == "--extracted_input_target_files":
1384      OPTIONS.extracted_input = a
1385    elif o == "--skip_postinstall":
1386      OPTIONS.skip_postinstall = True
1387    elif o == "--retrofit_dynamic_partitions":
1388      OPTIONS.retrofit_dynamic_partitions = True
1389    elif o == "--skip_compatibility_check":
1390      OPTIONS.skip_compatibility_check = True
1391    elif o == "--output_metadata_path":
1392      OPTIONS.output_metadata_path = a
1393    elif o == "--disable_fec_computation":
1394      OPTIONS.disable_fec_computation = True
1395    elif o == "--disable_verity_computation":
1396      OPTIONS.disable_verity_computation = True
1397    elif o == "--force_non_ab":
1398      OPTIONS.force_non_ab = True
1399    elif o == "--boot_variable_file":
1400      OPTIONS.boot_variable_file = a
1401    elif o == "--partial":
1402      partitions = a.split()
1403      if not partitions:
1404        raise ValueError("Cannot parse partitions in {}".format(a))
1405      OPTIONS.partial = partitions
1406    elif o == "--custom_image":
1407      custom_partition, custom_image = a.split("=")
1408      OPTIONS.custom_images[custom_partition] = custom_image
1409    elif o == "--disable_vabc":
1410      OPTIONS.disable_vabc = True
1411    elif o == "--spl_downgrade":
1412      OPTIONS.spl_downgrade = True
1413      OPTIONS.wipe_user_data = True
1414    elif o == "--vabc_downgrade":
1415      OPTIONS.vabc_downgrade = True
1416    elif o == "--enable_vabc_xor":
1417      assert a.lower() in ["true", "false"]
1418      OPTIONS.enable_vabc_xor = a.lower() != "false"
1419    elif o == "--force_minor_version":
1420      OPTIONS.force_minor_version = a
1421    elif o == "--compressor_types":
1422      OPTIONS.compressor_types = a
1423    elif o == "--enable_zucchini":
1424      assert a.lower() in ["true", "false"]
1425      OPTIONS.enable_zucchini = a.lower() != "false"
1426    elif o == "--enable_lz4diff":
1427      assert a.lower() in ["true", "false"]
1428      OPTIONS.enable_lz4diff = a.lower() != "false"
1429    elif o == "--vabc_compression_param":
1430      OPTIONS.vabc_compression_param = a.lower()
1431    else:
1432      return False
1433    return True
1434
1435  args = common.ParseOptions(argv, __doc__,
1436                             extra_opts="b:k:i:d:e:t:2o:",
1437                             extra_long_opts=[
1438                                 "package_key=",
1439                                 "incremental_from=",
1440                                 "full_radio",
1441                                 "full_bootloader",
1442                                 "wipe_user_data",
1443                                 "downgrade",
1444                                 "override_timestamp",
1445                                 "extra_script=",
1446                                 "worker_threads=",
1447                                 "two_step",
1448                                 "include_secondary",
1449                                 "no_signing",
1450                                 "block",
1451                                 "binary=",
1452                                 "oem_settings=",
1453                                 "oem_no_mount",
1454                                 "verify",
1455                                 "stash_threshold=",
1456                                 "log_diff=",
1457                                 "payload_signer=",
1458                                 "payload_signer_args=",
1459                                 "payload_signer_maximum_signature_size=",
1460                                 "payload_signer_key_size=",
1461                                 "extracted_input_target_files=",
1462                                 "skip_postinstall",
1463                                 "retrofit_dynamic_partitions",
1464                                 "skip_compatibility_check",
1465                                 "output_metadata_path=",
1466                                 "disable_fec_computation",
1467                                 "disable_verity_computation",
1468                                 "force_non_ab",
1469                                 "boot_variable_file=",
1470                                 "partial=",
1471                                 "custom_image=",
1472                                 "disable_vabc",
1473                                 "spl_downgrade",
1474                                 "vabc_downgrade",
1475                                 "enable_vabc_xor=",
1476                                 "force_minor_version=",
1477                                 "compressor_types=",
1478                                 "enable_zucchini=",
1479                                 "enable_lz4diff=",
1480                                 "vabc_compression_param=",
1481                             ], extra_option_handler=option_handler)
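  # Note on the option strings above: extra_opts uses getopt syntax, so a
  # trailing ':' (or '=' in the long-option list) marks an option that takes a
  # value, e.g. "-k <key>", "-i <file>", "--log_diff=<path>". "2" is the short
  # form of --two_step.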
1482
1483  if len(args) != 2:
1484    common.Usage(__doc__)
1485    sys.exit(1)
1486
1487  common.InitLogging()
1488
1489  # Load the build info dicts from the zip directly or from the extracted input
1490  # directory. We don't need to unzip the entire target-files zip, because its
1491  # full contents aren't needed for A/B OTAs (brillo_update_payload handles the
1492  # extraction on its own). When loading the info dicts, we don't need to provide
1493  # the second parameter to common.LoadInfoDict(); that parameter only replaces
1494  # some properties (such as 'selinux_fc' and 'ramdisk_dir') with their actual
1495  # paths, which aren't used during OTA generation.
1496  if OPTIONS.extracted_input is not None:
1497    OPTIONS.info_dict = common.LoadInfoDict(OPTIONS.extracted_input)
1498  else:
1499    OPTIONS.info_dict = ParseInfoDict(args[0])
1500
1501  if OPTIONS.wipe_user_data:
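    # With Virtual A/B Compression (VABC), the update is written to compressed
    # snapshots that are merged after reboot; combined with a data wipe this
    # leaves the user waiting in recovery for the merge, so VABC is disabled
    # below unless --vabc_downgrade is explicitly passed.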
1502    if not OPTIONS.vabc_downgrade:
1503      logger.info("Detected a downgrade/datawipe OTA. "
1504                  "When wiping userdata, a VABC OTA makes the user "
1505                  "wait in recovery mode for the merge to finish. Disabling "
1506                  "VABC by default. If you really want a VABC downgrade, "
1507                  "pass --vabc_downgrade")
1508      OPTIONS.disable_vabc = True
1509    # We should only allow downgrades via incremental OTAs (as opposed to full
1510    # ones). Otherwise the device could be rolled back from an arbitrary build
1511    # with this full OTA package.
1512  if OPTIONS.incremental_source is None and OPTIONS.downgrade:
1513    raise ValueError("Cannot generate downgradable full OTAs")
1514
1515  # TODO(xunchang) For retrofit and partial updates, maybe we should rebuild the
1516  # target-files zip and reload the info_dict, so that the info stays consistent
1517  # with the modified target-files.
1518
1519  logger.info("--- target info ---")
1520  common.DumpInfoDict(OPTIONS.info_dict)
1521
1522  # Load the source build dict if applicable.
1523  if OPTIONS.incremental_source is not None:
1524    OPTIONS.target_info_dict = OPTIONS.info_dict
1525    OPTIONS.source_info_dict = ParseInfoDict(OPTIONS.incremental_source)
1526
1527    logger.info("--- source info ---")
1528    common.DumpInfoDict(OPTIONS.source_info_dict)
1529
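  # For example (illustrative values only): with ab_partitions of
  # ['boot', 'system', 'vendor'] and --partial "system vendor", only 'system'
  # and 'vendor' are kept in the lists below.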
1530  if OPTIONS.partial:
1531    OPTIONS.info_dict['ab_partitions'] = list(
1532        set(OPTIONS.info_dict['ab_partitions']) &
1533        set(OPTIONS.partial)
1534    )
1535    if OPTIONS.source_info_dict:
1536      OPTIONS.source_info_dict['ab_partitions'] = list(
1537          set(OPTIONS.source_info_dict['ab_partitions']) &
1538          set(OPTIONS.partial)
1539      )
1541
1542  # Load OEM dicts if provided.
1543  OPTIONS.oem_dicts = _LoadOemDicts(OPTIONS.oem_source)
1544
1545  # Assume retrofitting dynamic partitions when base build does not set
1546  # use_dynamic_partitions but target build does.
1547  if (OPTIONS.source_info_dict and
1548      OPTIONS.source_info_dict.get("use_dynamic_partitions") != "true" and
1549          OPTIONS.target_info_dict.get("use_dynamic_partitions") == "true"):
1550    if OPTIONS.target_info_dict.get("dynamic_partition_retrofit") != "true":
1551      raise common.ExternalError(
1552          "Expected to generate an incremental OTA for retrofitting dynamic "
1553          "partitions, but dynamic_partition_retrofit is not set in the "
1554          "target build.")
1555    logger.info("Implicitly generating retrofit incremental OTA.")
1556    OPTIONS.retrofit_dynamic_partitions = True
1557
1558  # Skip postinstall for retrofitting dynamic partitions.
1559  if OPTIONS.retrofit_dynamic_partitions:
1560    OPTIONS.skip_postinstall = True
1561
1562  ab_update = OPTIONS.info_dict.get("ab_update") == "true"
1563  allow_non_ab = OPTIONS.info_dict.get("allow_non_ab") == "true"
1564  if OPTIONS.force_non_ab:
1565    assert allow_non_ab,\
1566        "--force_non_ab only allowed on devices that support non-A/B"
1567    assert ab_update, "--force_non_ab only allowed on A/B devices"
1568
1569  generate_ab = not OPTIONS.force_non_ab and ab_update
1570
1571  # Use the default key to sign the package if not specified via --package_key.
1572  # A package key is needed for A/B updates, so always define one if an
1573  # A/B update is being created.
1574  if not OPTIONS.no_signing or generate_ab:
1575    if OPTIONS.package_key is None:
1576      OPTIONS.package_key = OPTIONS.info_dict.get(
1577          "default_system_dev_certificate",
1578          "build/make/target/product/security/testkey")
1579    # Get signing keys
1580    OPTIONS.key_passwords = common.GetKeyPasswords([OPTIONS.package_key])
1581
1582    # Only check for the existence of the key file if using the default signer,
1583    # because a custom signer might not need the key file at all.
1584    # b/191704641
1585    if not OPTIONS.payload_signer:
1586      private_key_path = OPTIONS.package_key + OPTIONS.private_key_suffix
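      # e.g. "build/make/target/product/security/testkey" + ".pk8" when the
      # default test key and private key suffix are used.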
1587      if not os.path.exists(private_key_path):
1588        raise common.ExternalError(
1589            "Private key {} doesn't exist. Make sure you passed the"
1590            " correct key path through the -k option".format(
1591                private_key_path)
1592        )
1593      signapk_abs_path = os.path.join(
1594          OPTIONS.search_path, OPTIONS.signapk_path)
1595      if not os.path.exists(signapk_abs_path):
1596        raise common.ExternalError(
1597            "Failed to find the signapk binary {} in search path {}. Make sure the correct search path is passed via -p".format(OPTIONS.signapk_path, OPTIONS.search_path))
1598
1599  if OPTIONS.source_info_dict:
1600    source_build_prop = OPTIONS.source_info_dict["build.prop"]
1601    target_build_prop = OPTIONS.target_info_dict["build.prop"]
1602    source_spl = source_build_prop.GetProp(SECURITY_PATCH_LEVEL_PROP_NAME)
1603    target_spl = target_build_prop.GetProp(SECURITY_PATCH_LEVEL_PROP_NAME)
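    # SPLs are zero-padded "yyyy-mm-dd" strings, so the plain string comparison
    # below orders them chronologically (e.g. "2021-02-05" < "2021-06-05").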
1604    is_spl_downgrade = target_spl < source_spl
1605    if is_spl_downgrade and not OPTIONS.spl_downgrade and not OPTIONS.downgrade:
1606      raise common.ExternalError(
1607          "Target security patch level {} is older than source SPL {}; applying "
1608          "such an OTA will likely cause the device to fail to boot. Pass "
1609          "--spl_downgrade to override this check. This script expects the "
1610          "security patch level to be in the format yyyy-mm-dd (e.g. 2021-02-05). "
1611          "Separators other than '-' may be used, as long as they are used "
1612          "consistently across all SPL dates".format(target_spl, source_spl))
1613    elif not is_spl_downgrade and OPTIONS.spl_downgrade:
1614      raise ValueError("--spl_downgrade specified but no actual SPL downgrade"
1615                       " detected. Please only pass in this flag if you want an"
1616                       " SPL downgrade. Target SPL: {}, Source SPL: {}"
1617                       .format(target_spl, source_spl))
1618  if generate_ab:
1619    GenerateAbOtaPackage(
1620        target_file=args[0],
1621        output_file=args[1],
1622        source_file=OPTIONS.incremental_source)
1623
1624  else:
1625    GenerateNonAbOtaPackage(
1626        target_file=args[0],
1627        output_file=args[1],
1628        source_file=OPTIONS.incremental_source)
1629
1630  # Post-OTA-generation work.
1631  if OPTIONS.incremental_source is not None and OPTIONS.log_diff:
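    # Example invocation (illustrative) that reaches this branch:
    #   ota_from_target_files -i old-target_files.zip --log_diff diff.log \
    #       new-target_files.zip incremental-ota.zip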
1632    logger.info("Generating diff logs...")
1633    logger.info("Unzipping target-files for diffing...")
1634    target_dir = common.UnzipTemp(args[0], TARGET_DIFFING_UNZIP_PATTERN)
1635    source_dir = common.UnzipTemp(
1636        OPTIONS.incremental_source, TARGET_DIFFING_UNZIP_PATTERN)
1637
1638    with open(OPTIONS.log_diff, 'w') as out_file:
1639      target_files_diff.recursiveDiff(
1640          '', source_dir, target_dir, out_file)
1641
1642  logger.info("done.")
1643
1644
1645if __name__ == '__main__':
1646  try:
1647    common.CloseInheritedPipes()
1648    main(sys.argv[1:])
1649  finally:
1650    common.Cleanup()
1651