1#!/usr/bin/env python
2#
3# Copyright (C) 2008 The Android Open Source Project
4#
5# Licensed under the Apache License, Version 2.0 (the "License");
6# you may not use this file except in compliance with the License.
7# You may obtain a copy of the License at
8#
9#      http://www.apache.org/licenses/LICENSE-2.0
10#
11# Unless required by applicable law or agreed to in writing, software
12# distributed under the License is distributed on an "AS IS" BASIS,
13# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14# See the License for the specific language governing permissions and
15# limitations under the License.
16
17"""
18Given a target-files zipfile, produces an OTA package that installs that build.
19An incremental OTA is produced if -i is given, otherwise a full OTA is produced.
20
21Usage:  ota_from_target_files [options] input_target_files output_ota_package
22
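For example (file names below are illustrative), a full OTA can be generated
with

  ota_from_target_files target_files.zip ota_update.zip

and an incremental OTA from an older build with

  ota_from_target_files -i old_target_files.zip target_files.zip incremental_ota.zip
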
Common options that apply to both non-A/B and A/B OTAs
24
25  --downgrade
26      Intentionally generate an incremental OTA that updates from a newer build
27      to an older one (e.g. downgrading from P preview back to O MR1).
28      "ota-downgrade=yes" will be set in the package metadata file. A data wipe
29      will always be enforced when using this flag, so "ota-wipe=yes" will also
30      be included in the metadata file. The update-binary in the source build
31      will be used in the OTA package, unless --binary flag is specified. Please
32      also check the comment for --override_timestamp below.
33
34  -i  (--incremental_from) <file>
35      Generate an incremental OTA using the given target-files zip as the
36      starting build.
37
38  -k  (--package_key) <key>
39      Key to use to sign the package (default is the value of
40      default_system_dev_certificate from the input target-files's
41      META/misc_info.txt, or "build/make/target/product/security/testkey" if
42      that value is not specified).
43
      For incremental OTAs, the default value is based on the source
      target-files, not the target build.
46
47  --override_timestamp
48      Intentionally generate an incremental OTA that updates from a newer build
49      to an older one (based on timestamp comparison), by setting the downgrade
      flag in the package metadata. This differs from the --downgrade flag in
      that we don't enforce a data wipe, because we know for sure this is not an
      actual downgrade case; the two builds merely happen to be cut in reverse
      order (e.g. from two branches). A legitimate use case is that we cut a new
      build C (after having A and B), but want to enforce an update path of
      A -> C -> B. Specifying --downgrade may not help, since that would enforce
      a data wipe for the C -> B update.
57
58      We used to set a fake timestamp in the package metadata for this flow. But
59      now we consolidate the two cases (i.e. an actual downgrade, or a downgrade
60      based on timestamp) with the same "ota-downgrade=yes" flag, with the
61      difference being whether "ota-wipe=yes" is set.
62
63  --wipe_user_data
64      Generate an OTA package that will wipe the user data partition when
65      installed.
66
67  --retrofit_dynamic_partitions
68      Generates an OTA package that updates a device to support dynamic
69      partitions (default False). This flag is implied when generating
70      an incremental OTA where the base build does not support dynamic
71      partitions but the target build does. For A/B, when this flag is set,
72      --skip_postinstall is implied.
73
74  --skip_compatibility_check
75      Skip checking compatibility of the input target files package.
76
77  --output_metadata_path
      Write a copy of the metadata to a separate file, so that users can read
      the post-build fingerprint without extracting the OTA package.
80
81  --force_non_ab
      This flag can only be set on an A/B device that also supports non-A/B
      updates. Implies --two_step.
      If set, generate a non-A/B update package.
      If not set, generate an A/B package for an A/B device and a non-A/B
      package for a non-A/B device.
87
88Non-A/B OTA specific options
89
90  -b  (--binary) <file>
91      Use the given binary as the update-binary in the output package, instead
92      of the binary in the build's target_files. Use for development only.
93
94  --block
      Generate a block-based OTA for a non-A/B device. Support for file-based
      OTAs has been deprecated since O. Block-based OTAs are used by default for
      all non-A/B devices. This flag is kept only to avoid breaking existing
      callers.
99
100  -e  (--extra_script) <file>
101      Insert the contents of file at the end of the update script.
102
103  --full_bootloader
      Similar to --full_radio. When generating an incremental OTA, always
      include a full copy of the bootloader image.
106
107  --full_radio
      When generating an incremental OTA, always include a full copy of the
      radio image. This option is only meaningful when -i is specified, because
      a full radio is always included in a full OTA if applicable.
111
112  --log_diff <file>
113      Generate a log file that shows the differences in the source and target
114      builds for an incremental package. This option is only meaningful when -i
115      is specified.
116
117  -o  (--oem_settings) <main_file[,additional_files...]>
      Comma-separated list of files used to specify the expected OEM-specific
      properties on the OEM partition of the intended device. Multiple expected
      values can be used by providing multiple files. Only the first dict will
      be used to compute the fingerprint, while the rest will be used to assert
122      OEM-specific properties.
123
124  --oem_no_mount
125      For devices with OEM-specific properties but without an OEM partition, do
126      not mount the OEM partition in the updater-script. This should be very
127      rarely used, since it's expected to have a dedicated OEM partition for
128      OEM-specific properties. Only meaningful when -o is specified.
129
130  --stash_threshold <float>
131      Specify the threshold that will be used to compute the maximum allowed
132      stash size (defaults to 0.8).
133
134  -t  (--worker_threads) <int>
135      Specify the number of worker-threads that will be used when generating
136      patches for incremental updates (defaults to 3).
137
138  --verify
139      Verify the checksums of the updated system and vendor (if any) partitions.
140      Non-A/B incremental OTAs only.
141
142  -2  (--two_step)
143      Generate a 'two-step' OTA package, where recovery is updated first, so
144      that any changes made to the system partition are done using the new
145      recovery (new kernel, etc.).
146
147A/B OTA specific options
148
149  --disable_fec_computation
      Disable the on-device FEC data computation for incremental updates.
151
152  --include_secondary
153      Additionally include the payload for secondary slot images (default:
154      False). Only meaningful when generating A/B OTAs.
155
156      By default, an A/B OTA package doesn't contain the images for the
157      secondary slot (e.g. system_other.img). Specifying this flag allows
158      generating a separate payload that will install secondary slot images.
159
160      Such a package needs to be applied in a two-stage manner, with a reboot
161      in-between. During the first stage, the updater applies the primary
162      payload only. Upon finishing, it reboots the device into the newly updated
163      slot. It then continues to install the secondary payload to the inactive
164      slot, but without switching the active slot at the end (needs the matching
165      support in update_engine, i.e. SWITCH_SLOT_ON_REBOOT flag).
166
      Due to the special install procedure, the secondary payload will always be
      generated as a full payload.
169
170  --payload_signer <signer>
171      Specify the signer when signing the payload and metadata for A/B OTAs.
172      By default (i.e. without this flag), it calls 'openssl pkeyutl' to sign
173      with the package private key. If the private key cannot be accessed
174      directly, a payload signer that knows how to do that should be specified.
175      The signer will be supplied with "-inkey <path_to_key>",
176      "-in <input_file>" and "-out <output_file>" parameters.
177
178  --payload_signer_args <args>
      Specify the arguments needed for the payload signer.
180
181  --payload_signer_maximum_signature_size <signature_size>
182      The maximum signature size (in bytes) that would be generated by the given
      payload signer. Only meaningful when a custom payload signer is specified
      via '--payload_signer'.
      If the signer uses an RSA key, this should be the number of bytes to
186      represent the modulus. If it uses an EC key, this is the size of a
187      DER-encoded ECDSA signature.
188
189  --payload_signer_key_size <key_size>
      Deprecated. Use '--payload_signer_maximum_signature_size' instead.
191
192  --boot_variable_file <path>
193      A file that contains the possible values of ro.boot.* properties. It's
194      used to calculate the possible runtime fingerprints when some
195      ro.product.* properties are overridden by the 'import' statement.
196      The file expects one property per line, and each line has the following
197      format: 'prop_name=value1,value2'. e.g. 'ro.boot.product.sku=std,pro'
198
199  --skip_postinstall
200      Skip the postinstall hooks when generating an A/B OTA package (default:
201      False). Note that this discards ALL the hooks, including non-optional
202      ones. Should only be used if caller knows it's safe to do so (e.g. all the
203      postinstall work is to dexopt apps and a data wipe will happen immediately
204      after). Only meaningful when generating A/B OTAs.
205"""
206
207from __future__ import print_function
208
209import collections
210import copy
211import itertools
212import logging
213import multiprocessing
214import os.path
215import shlex
216import shutil
217import struct
218import sys
219import zipfile
220
221import check_target_files_vintf
222import common
223import edify_generator
224import verity_utils
225
226if sys.hexversion < 0x02070000:
227  print("Python 2.7 or newer is required.", file=sys.stderr)
228  sys.exit(1)
229
230logger = logging.getLogger(__name__)
231
232OPTIONS = common.OPTIONS
233OPTIONS.package_key = None
234OPTIONS.incremental_source = None
235OPTIONS.verify = False
236OPTIONS.patch_threshold = 0.95
237OPTIONS.wipe_user_data = False
238OPTIONS.downgrade = False
239OPTIONS.extra_script = None
240OPTIONS.worker_threads = multiprocessing.cpu_count() // 2
241if OPTIONS.worker_threads == 0:
242  OPTIONS.worker_threads = 1
243OPTIONS.two_step = False
244OPTIONS.include_secondary = False
245OPTIONS.no_signing = False
246OPTIONS.block_based = True
247OPTIONS.updater_binary = None
248OPTIONS.oem_dicts = None
249OPTIONS.oem_source = None
250OPTIONS.oem_no_mount = False
251OPTIONS.full_radio = False
252OPTIONS.full_bootloader = False
253# Stash size cannot exceed cache_size * threshold.
254OPTIONS.cache_size = None
255OPTIONS.stash_threshold = 0.8
256OPTIONS.log_diff = None
257OPTIONS.payload_signer = None
258OPTIONS.payload_signer_args = []
259OPTIONS.payload_signer_maximum_signature_size = None
260OPTIONS.extracted_input = None
261OPTIONS.key_passwords = []
262OPTIONS.skip_postinstall = False
263OPTIONS.retrofit_dynamic_partitions = False
264OPTIONS.skip_compatibility_check = False
265OPTIONS.output_metadata_path = None
266OPTIONS.disable_fec_computation = False
267OPTIONS.force_non_ab = False
268OPTIONS.boot_variable_file = None
269
270
271METADATA_NAME = 'META-INF/com/android/metadata'
272POSTINSTALL_CONFIG = 'META/postinstall_config.txt'
273DYNAMIC_PARTITION_INFO = 'META/dynamic_partitions_info.txt'
274AB_PARTITIONS = 'META/ab_partitions.txt'
275UNZIP_PATTERN = ['IMAGES/*', 'META/*', 'OTA/*', 'RADIO/*']
276# Files to be unzipped for target diffing purpose.
277TARGET_DIFFING_UNZIP_PATTERN = ['BOOT', 'RECOVERY', 'SYSTEM/*', 'VENDOR/*',
278                                'PRODUCT/*', 'SYSTEM_EXT/*', 'ODM/*']
279RETROFIT_DAP_UNZIP_PATTERN = ['OTA/super_*.img', AB_PARTITIONS]
280
281# Images to be excluded from secondary payload. We essentially only keep
282# 'system_other' and bootloader partitions.
283SECONDARY_PAYLOAD_SKIPPED_IMAGES = [
284    'boot', 'dtbo', 'modem', 'odm', 'product', 'radio', 'recovery',
285    'system_ext', 'vbmeta', 'vbmeta_system', 'vbmeta_vendor', 'vendor',
286    'vendor_boot']
287
288
289class PayloadSigner(object):
290  """A class that wraps the payload signing works.
291
292  When generating a Payload, hashes of the payload and metadata files will be
293  signed with the device key, either by calling an external payload signer or
294  by calling openssl with the package key. This class provides a unified
295  interface, so that callers can just call PayloadSigner.Sign().
296
297  If an external payload signer has been specified (OPTIONS.payload_signer), it
298  calls the signer with the provided args (OPTIONS.payload_signer_args). Note
299  that the signing key should be provided as part of the payload_signer_args.
  Otherwise, without an external signer, it uses the package key
  (OPTIONS.package_key) and calls openssl to do the signing.
302  """
303
304  def __init__(self):
305    if OPTIONS.payload_signer is None:
306      # Prepare the payload signing key.
307      private_key = OPTIONS.package_key + OPTIONS.private_key_suffix
308      pw = OPTIONS.key_passwords[OPTIONS.package_key]
309
310      cmd = ["openssl", "pkcs8", "-in", private_key, "-inform", "DER"]
311      cmd.extend(["-passin", "pass:" + pw] if pw else ["-nocrypt"])
312      signing_key = common.MakeTempFile(prefix="key-", suffix=".key")
313      cmd.extend(["-out", signing_key])
314      common.RunAndCheckOutput(cmd, verbose=False)
315
316      self.signer = "openssl"
317      self.signer_args = ["pkeyutl", "-sign", "-inkey", signing_key,
318                          "-pkeyopt", "digest:sha256"]
319      self.maximum_signature_size = self._GetMaximumSignatureSizeInBytes(
320          signing_key)
321    else:
322      self.signer = OPTIONS.payload_signer
323      self.signer_args = OPTIONS.payload_signer_args
324      if OPTIONS.payload_signer_maximum_signature_size:
325        self.maximum_signature_size = int(
326            OPTIONS.payload_signer_maximum_signature_size)
327      else:
328        # The legacy config uses RSA2048 keys.
329        logger.warning("The maximum signature size for payload signer is not"
330                       " set, default to 256 bytes.")
331        self.maximum_signature_size = 256
332
333  @staticmethod
334  def _GetMaximumSignatureSizeInBytes(signing_key):
335    out_signature_size_file = common.MakeTempFile("signature_size")
336    cmd = ["delta_generator", "--out_maximum_signature_size_file={}".format(
337        out_signature_size_file), "--private_key={}".format(signing_key)]
338    common.RunAndCheckOutput(cmd)
339    with open(out_signature_size_file) as f:
340      signature_size = f.read().rstrip()
341    logger.info("%s outputs the maximum signature size: %s", cmd[0],
342                signature_size)
343    return int(signature_size)
344
345  def Sign(self, in_file):
346    """Signs the given input file. Returns the output filename."""
347    out_file = common.MakeTempFile(prefix="signed-", suffix=".bin")
348    cmd = [self.signer] + self.signer_args + ['-in', in_file, '-out', out_file]
349    common.RunAndCheckOutput(cmd)
350    return out_file
351
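# Example (illustrative) of using PayloadSigner directly; the input path below
# is hypothetical, and OPTIONS (package key, key passwords, etc.) must already
# be populated. Payload.Sign() below drives this flow for the payload and
# metadata hashes.
#
#   signer = PayloadSigner()
#   signed_hash_file = signer.Sign("/tmp/payload_hash.bin")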
352
353class Payload(object):
354  """Manages the creation and the signing of an A/B OTA Payload."""
355
356  PAYLOAD_BIN = 'payload.bin'
357  PAYLOAD_PROPERTIES_TXT = 'payload_properties.txt'
358  SECONDARY_PAYLOAD_BIN = 'secondary/payload.bin'
359  SECONDARY_PAYLOAD_PROPERTIES_TXT = 'secondary/payload_properties.txt'
360
361  def __init__(self, secondary=False):
362    """Initializes a Payload instance.
363
364    Args:
365      secondary: Whether it's generating a secondary payload (default: False).
366    """
367    self.payload_file = None
368    self.payload_properties = None
369    self.secondary = secondary
370
371  def _Run(self, cmd):  # pylint: disable=no-self-use
372    # Don't pipe (buffer) the output if verbose is set. Let
373    # brillo_update_payload write to stdout/stderr directly, so its progress can
374    # be monitored.
375    if OPTIONS.verbose:
376      common.RunAndCheckOutput(cmd, stdout=None, stderr=None)
377    else:
378      common.RunAndCheckOutput(cmd)
379
380  def Generate(self, target_file, source_file=None, additional_args=None):
381    """Generates a payload from the given target-files zip(s).
382
383    Args:
384      target_file: The filename of the target build target-files zip.
385      source_file: The filename of the source build target-files zip; or None if
386          generating a full OTA.
387      additional_args: A list of additional args that should be passed to
388          brillo_update_payload script; or None.
389    """
390    if additional_args is None:
391      additional_args = []
392
393    payload_file = common.MakeTempFile(prefix="payload-", suffix=".bin")
394    cmd = ["brillo_update_payload", "generate",
395           "--payload", payload_file,
396           "--target_image", target_file]
397    if source_file is not None:
398      cmd.extend(["--source_image", source_file])
399      if OPTIONS.disable_fec_computation:
400        cmd.extend(["--disable_fec_computation", "true"])
401    cmd.extend(additional_args)
402    self._Run(cmd)
403
404    self.payload_file = payload_file
405    self.payload_properties = None
406
407  def Sign(self, payload_signer):
408    """Generates and signs the hashes of the payload and metadata.
409
410    Args:
      payload_signer: A PayloadSigner() instance that performs the signing work.
412
413    Raises:
414      AssertionError: On any failure when calling brillo_update_payload script.
415    """
416    assert isinstance(payload_signer, PayloadSigner)
417
418    # 1. Generate hashes of the payload and metadata files.
419    payload_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
420    metadata_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
421    cmd = ["brillo_update_payload", "hash",
422           "--unsigned_payload", self.payload_file,
423           "--signature_size", str(payload_signer.maximum_signature_size),
424           "--metadata_hash_file", metadata_sig_file,
425           "--payload_hash_file", payload_sig_file]
426    self._Run(cmd)
427
428    # 2. Sign the hashes.
429    signed_payload_sig_file = payload_signer.Sign(payload_sig_file)
430    signed_metadata_sig_file = payload_signer.Sign(metadata_sig_file)
431
432    # 3. Insert the signatures back into the payload file.
433    signed_payload_file = common.MakeTempFile(prefix="signed-payload-",
434                                              suffix=".bin")
435    cmd = ["brillo_update_payload", "sign",
436           "--unsigned_payload", self.payload_file,
437           "--payload", signed_payload_file,
438           "--signature_size", str(payload_signer.maximum_signature_size),
439           "--metadata_signature_file", signed_metadata_sig_file,
440           "--payload_signature_file", signed_payload_sig_file]
441    self._Run(cmd)
442
443    # 4. Dump the signed payload properties.
444    properties_file = common.MakeTempFile(prefix="payload-properties-",
445                                          suffix=".txt")
446    cmd = ["brillo_update_payload", "properties",
447           "--payload", signed_payload_file,
448           "--properties_file", properties_file]
449    self._Run(cmd)
450
451    if self.secondary:
452      with open(properties_file, "a") as f:
453        f.write("SWITCH_SLOT_ON_REBOOT=0\n")
454
455    if OPTIONS.wipe_user_data:
456      with open(properties_file, "a") as f:
457        f.write("POWERWASH=1\n")
458
459    self.payload_file = signed_payload_file
460    self.payload_properties = properties_file
461
462  def WriteToZip(self, output_zip):
463    """Writes the payload to the given zip.
464
465    Args:
466      output_zip: The output ZipFile instance.
467    """
468    assert self.payload_file is not None
469    assert self.payload_properties is not None
470
471    if self.secondary:
472      payload_arcname = Payload.SECONDARY_PAYLOAD_BIN
473      payload_properties_arcname = Payload.SECONDARY_PAYLOAD_PROPERTIES_TXT
474    else:
475      payload_arcname = Payload.PAYLOAD_BIN
476      payload_properties_arcname = Payload.PAYLOAD_PROPERTIES_TXT
477
    # Add the signed payload file and properties into the zip. In order to
    # support streaming, we pack them as ZIP_STORED, so that these entries can
    # be read directly with the offset and length pairs.
481    common.ZipWrite(output_zip, self.payload_file, arcname=payload_arcname,
482                    compress_type=zipfile.ZIP_STORED)
483    common.ZipWrite(output_zip, self.payload_properties,
484                    arcname=payload_properties_arcname,
485                    compress_type=zipfile.ZIP_STORED)
486
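# Example (illustrative) of the typical full-OTA Payload flow defined above,
# with hypothetical file names; an incremental payload would additionally pass
# source_file to Generate().
#
#   payload = Payload()
#   payload.Generate("target_files.zip")
#   payload.Sign(PayloadSigner())
#   payload.WriteToZip(output_zip)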
487
488def SignOutput(temp_zip_name, output_zip_name):
489  pw = OPTIONS.key_passwords[OPTIONS.package_key]
490
491  common.SignFile(temp_zip_name, output_zip_name, OPTIONS.package_key, pw,
492                  whole_file=True)
493
494
495def _LoadOemDicts(oem_source):
496  """Returns the list of loaded OEM properties dict."""
497  if not oem_source:
498    return None
499
500  oem_dicts = []
501  for oem_file in oem_source:
502    with open(oem_file) as fp:
503      oem_dicts.append(common.LoadDictionaryFromLines(fp.readlines()))
504  return oem_dicts
505
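# Note (illustrative): each file passed to -o and loaded by _LoadOemDicts() is
# a plain key=value property file, e.g. a line such as
# "ro.oem.device=example_device" (the property name and value here are
# hypothetical).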
506
507def _WriteRecoveryImageToBoot(script, output_zip):
508  """Find and write recovery image to /boot in two-step OTA.
509
510  In two-step OTAs, we write recovery image to /boot as the first step so that
511  we can reboot to there and install a new recovery image to /recovery.
512  A special "recovery-two-step.img" will be preferred, which encodes the correct
513  path of "/boot". Otherwise the device may show "device is corrupt" message
514  when booting into /boot.
515
516  Fall back to using the regular recovery.img if the two-step recovery image
517  doesn't exist. Note that rebuilding the special image at this point may be
518  infeasible, because we don't have the desired boot signer and keys when
519  calling ota_from_target_files.py.
520  """
521
522  recovery_two_step_img_name = "recovery-two-step.img"
523  recovery_two_step_img_path = os.path.join(
524      OPTIONS.input_tmp, "OTA", recovery_two_step_img_name)
525  if os.path.exists(recovery_two_step_img_path):
526    common.ZipWrite(
527        output_zip,
528        recovery_two_step_img_path,
529        arcname=recovery_two_step_img_name)
530    logger.info(
531        "two-step package: using %s in stage 1/3", recovery_two_step_img_name)
532    script.WriteRawImage("/boot", recovery_two_step_img_name)
533  else:
534    logger.info("two-step package: using recovery.img in stage 1/3")
535    # The "recovery.img" entry has been written into package earlier.
536    script.WriteRawImage("/boot", "recovery.img")
537
538
539def HasRecoveryPatch(target_files_zip, info_dict):
540  board_uses_vendorimage = info_dict.get("board_uses_vendorimage") == "true"
541
542  if board_uses_vendorimage:
543    target_files_dir = "VENDOR"
544  else:
545    target_files_dir = "SYSTEM/vendor"
546
547  patch = "%s/recovery-from-boot.p" % target_files_dir
  img = "%s/etc/recovery.img" % target_files_dir

  namelist = target_files_zip.namelist()
551  return (patch in namelist or img in namelist)
552
553
554def HasPartition(target_files_zip, partition):
555  try:
556    target_files_zip.getinfo(partition.upper() + "/")
557    return True
558  except KeyError:
559    return False
560
561
562def HasTrebleEnabled(target_files, target_info):
563  def HasVendorPartition(target_files):
564    if os.path.isdir(target_files):
565      return os.path.isdir(os.path.join(target_files, "VENDOR"))
566    if zipfile.is_zipfile(target_files):
567      return HasPartition(zipfile.ZipFile(target_files), "vendor")
568    raise ValueError("Unknown target_files argument")
569
570  return (HasVendorPartition(target_files) and
571          target_info.GetBuildProp("ro.treble.enabled") == "true")
572
573
574def WriteFingerprintAssertion(script, target_info, source_info):
575  source_oem_props = source_info.oem_props
576  target_oem_props = target_info.oem_props
577
578  if source_oem_props is None and target_oem_props is None:
579    script.AssertSomeFingerprint(
580        source_info.fingerprint, target_info.fingerprint)
581  elif source_oem_props is not None and target_oem_props is not None:
582    script.AssertSomeThumbprint(
583        target_info.GetBuildProp("ro.build.thumbprint"),
584        source_info.GetBuildProp("ro.build.thumbprint"))
585  elif source_oem_props is None and target_oem_props is not None:
586    script.AssertFingerprintOrThumbprint(
587        source_info.fingerprint,
588        target_info.GetBuildProp("ro.build.thumbprint"))
589  else:
590    script.AssertFingerprintOrThumbprint(
591        target_info.fingerprint,
592        source_info.GetBuildProp("ro.build.thumbprint"))
593
594
595def CheckVintfIfTrebleEnabled(target_files, target_info):
596  """Checks compatibility info of the input target files.
597
  Metadata used for compatibility verification is retrieved from target_files.
599
600  Compatibility should only be checked for devices that have enabled
601  Treble support.
602
603  Args:
604    target_files: Path to zip file containing the source files to be included
605        for OTA. Can also be the path to extracted directory.
606    target_info: The BuildInfo instance that holds the target build info.
607  """
608
609  # Will only proceed if the target has enabled the Treble support (as well as
610  # having a /vendor partition).
611  if not HasTrebleEnabled(target_files, target_info):
612    return
613
  # Skip adding the compatibility package as a workaround for b/114240221. The
  # compatibility check will always fail on devices without qualified kernels.
616  if OPTIONS.skip_compatibility_check:
617    return
618
619  if not check_target_files_vintf.CheckVintf(target_files, target_info):
620    raise RuntimeError("VINTF compatibility check failed")
621
622
623def GetBlockDifferences(target_zip, source_zip, target_info, source_info,
624                        device_specific):
625  """Returns a ordered dict of block differences with partition name as key."""
626
627  def GetIncrementalBlockDifferenceForPartition(name):
628    if not HasPartition(source_zip, name):
629      raise RuntimeError("can't generate incremental that adds {}".format(name))
630
631    partition_src = common.GetUserImage(name, OPTIONS.source_tmp, source_zip,
632                                        info_dict=source_info,
633                                        allow_shared_blocks=allow_shared_blocks)
634
635    hashtree_info_generator = verity_utils.CreateHashtreeInfoGenerator(
636        name, 4096, target_info)
637    partition_tgt = common.GetUserImage(name, OPTIONS.target_tmp, target_zip,
638                                        info_dict=target_info,
639                                        allow_shared_blocks=allow_shared_blocks,
640                                        hashtree_info_generator=
641                                        hashtree_info_generator)
642
643    # Check the first block of the source system partition for remount R/W only
644    # if the filesystem is ext4.
645    partition_source_info = source_info["fstab"]["/" + name]
646    check_first_block = partition_source_info.fs_type == "ext4"
647    # Disable using imgdiff for squashfs. 'imgdiff -z' expects input files to be
648    # in zip formats. However with squashfs, a) all files are compressed in LZ4;
649    # b) the blocks listed in block map may not contain all the bytes for a
650    # given file (because they're rounded to be 4K-aligned).
651    partition_target_info = target_info["fstab"]["/" + name]
652    disable_imgdiff = (partition_source_info.fs_type == "squashfs" or
653                       partition_target_info.fs_type == "squashfs")
654    return common.BlockDifference(name, partition_tgt, partition_src,
655                                  check_first_block,
656                                  version=blockimgdiff_version,
657                                  disable_imgdiff=disable_imgdiff)
658
659  if source_zip:
660    # See notes in common.GetUserImage()
661    allow_shared_blocks = (source_info.get('ext4_share_dup_blocks') == "true" or
662                           target_info.get('ext4_share_dup_blocks') == "true")
663    blockimgdiff_version = max(
664        int(i) for i in target_info.get(
665            "blockimgdiff_versions", "1").split(","))
666    assert blockimgdiff_version >= 3
667
668  block_diff_dict = collections.OrderedDict()
669  partition_names = ["system", "vendor", "product", "odm", "system_ext"]
670  for partition in partition_names:
671    if not HasPartition(target_zip, partition):
672      continue
673    # Full OTA update.
674    if not source_zip:
675      tgt = common.GetUserImage(partition, OPTIONS.input_tmp, target_zip,
676                                info_dict=target_info,
677                                reset_file_map=True)
678      block_diff_dict[partition] = common.BlockDifference(partition, tgt,
679                                                          src=None)
680    # Incremental OTA update.
681    else:
682      block_diff_dict[partition] = GetIncrementalBlockDifferenceForPartition(
683          partition)
684  assert "system" in block_diff_dict
685
686  # Get the block diffs from the device specific script. If there is a
687  # duplicate block diff for a partition, ignore the diff in the generic script
688  # and use the one in the device specific script instead.
689  if source_zip:
690    device_specific_diffs = device_specific.IncrementalOTA_GetBlockDifferences()
691    function_name = "IncrementalOTA_GetBlockDifferences"
692  else:
693    device_specific_diffs = device_specific.FullOTA_GetBlockDifferences()
694    function_name = "FullOTA_GetBlockDifferences"
695
696  if device_specific_diffs:
697    assert all(isinstance(diff, common.BlockDifference)
698               for diff in device_specific_diffs), \
699        "{} is not returning a list of BlockDifference objects".format(
700            function_name)
701    for diff in device_specific_diffs:
702      if diff.partition in block_diff_dict:
703        logger.warning("Duplicate block difference found. Device specific block"
704                       " diff for partition '%s' overrides the one in generic"
705                       " script.", diff.partition)
706      block_diff_dict[diff.partition] = diff
707
708  return block_diff_dict
709
710
711def WriteFullOTAPackage(input_zip, output_file):
712  target_info = common.BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
713
714  # We don't know what version it will be installed on top of. We expect the API
715  # just won't change very often. Similarly for fstab, it might have changed in
716  # the target build.
717  target_api_version = target_info["recovery_api_version"]
718  script = edify_generator.EdifyGenerator(target_api_version, target_info)
719
720  if target_info.oem_props and not OPTIONS.oem_no_mount:
721    target_info.WriteMountOemScript(script)
722
723  metadata = GetPackageMetadata(target_info)
724
725  if not OPTIONS.no_signing:
726    staging_file = common.MakeTempFile(suffix='.zip')
727  else:
728    staging_file = output_file
729
730  output_zip = zipfile.ZipFile(
731      staging_file, "w", compression=zipfile.ZIP_DEFLATED)
732
733  device_specific = common.DeviceSpecificParams(
734      input_zip=input_zip,
735      input_version=target_api_version,
736      output_zip=output_zip,
737      script=script,
738      input_tmp=OPTIONS.input_tmp,
739      metadata=metadata,
740      info_dict=OPTIONS.info_dict)
741
742  assert HasRecoveryPatch(input_zip, info_dict=OPTIONS.info_dict)
743
744  # Assertions (e.g. downgrade check, device properties check).
745  ts = target_info.GetBuildProp("ro.build.date.utc")
746  ts_text = target_info.GetBuildProp("ro.build.date")
747  script.AssertOlderBuild(ts, ts_text)
748
749  target_info.WriteDeviceAssertions(script, OPTIONS.oem_no_mount)
750  device_specific.FullOTA_Assertions()
751
752  block_diff_dict = GetBlockDifferences(target_zip=input_zip, source_zip=None,
753                                        target_info=target_info,
754                                        source_info=None,
755                                        device_specific=device_specific)
756
757  # Two-step package strategy (in chronological order, which is *not*
758  # the order in which the generated script has things):
759  #
760  # if stage is not "2/3" or "3/3":
761  #    write recovery image to boot partition
762  #    set stage to "2/3"
763  #    reboot to boot partition and restart recovery
764  # else if stage is "2/3":
765  #    write recovery image to recovery partition
766  #    set stage to "3/3"
767  #    reboot to recovery partition and restart recovery
768  # else:
769  #    (stage must be "3/3")
770  #    set stage to ""
771  #    do normal full package installation:
772  #       wipe and install system, boot image, etc.
773  #       set up system to update recovery partition on first boot
774  #    complete script normally
775  #    (allow recovery to mark itself finished and reboot)
776
777  recovery_img = common.GetBootableImage("recovery.img", "recovery.img",
778                                         OPTIONS.input_tmp, "RECOVERY")
779  if OPTIONS.two_step:
780    if not target_info.get("multistage_support"):
781      assert False, "two-step packages not supported by this build"
782    fs = target_info["fstab"]["/misc"]
783    assert fs.fs_type.upper() == "EMMC", \
784        "two-step packages only supported on devices with EMMC /misc partitions"
785    bcb_dev = {"bcb_dev": fs.device}
786    common.ZipWriteStr(output_zip, "recovery.img", recovery_img.data)
787    script.AppendExtra("""
788if get_stage("%(bcb_dev)s") == "2/3" then
789""" % bcb_dev)
790
791    # Stage 2/3: Write recovery image to /recovery (currently running /boot).
792    script.Comment("Stage 2/3")
793    script.WriteRawImage("/recovery", "recovery.img")
794    script.AppendExtra("""
795set_stage("%(bcb_dev)s", "3/3");
796reboot_now("%(bcb_dev)s", "recovery");
797else if get_stage("%(bcb_dev)s") == "3/3" then
798""" % bcb_dev)
799
800    # Stage 3/3: Make changes.
801    script.Comment("Stage 3/3")
802
803  # Dump fingerprints
804  script.Print("Target: {}".format(target_info.fingerprint))
805
806  device_specific.FullOTA_InstallBegin()
807
  # Each of the other partitions, as well as the data wipe, uses 10% of the
  # progress; updating the system partition takes the remaining progress.
810  system_progress = 0.9 - (len(block_diff_dict) - 1) * 0.1
811  if OPTIONS.wipe_user_data:
812    system_progress -= 0.1
813  progress_dict = {partition: 0.1 for partition in block_diff_dict}
814  progress_dict["system"] = system_progress
815
816  if target_info.get('use_dynamic_partitions') == "true":
817    # Use empty source_info_dict to indicate that all partitions / groups must
818    # be re-added.
819    dynamic_partitions_diff = common.DynamicPartitionsDifference(
820        info_dict=OPTIONS.info_dict,
821        block_diffs=block_diff_dict.values(),
822        progress_dict=progress_dict)
823    dynamic_partitions_diff.WriteScript(script, output_zip,
824                                        write_verify_script=OPTIONS.verify)
825  else:
826    for block_diff in block_diff_dict.values():
827      block_diff.WriteScript(script, output_zip,
828                             progress=progress_dict.get(block_diff.partition),
829                             write_verify_script=OPTIONS.verify)
830
831  CheckVintfIfTrebleEnabled(OPTIONS.input_tmp, target_info)
832
833  boot_img = common.GetBootableImage(
834      "boot.img", "boot.img", OPTIONS.input_tmp, "BOOT")
835  common.CheckSize(boot_img.data, "boot.img", target_info)
836  common.ZipWriteStr(output_zip, "boot.img", boot_img.data)
837
838  script.WriteRawImage("/boot", "boot.img")
839
840  script.ShowProgress(0.1, 10)
841  device_specific.FullOTA_InstallEnd()
842
843  if OPTIONS.extra_script is not None:
844    script.AppendExtra(OPTIONS.extra_script)
845
846  script.UnmountAll()
847
848  if OPTIONS.wipe_user_data:
849    script.ShowProgress(0.1, 10)
850    script.FormatPartition("/data")
851
852  if OPTIONS.two_step:
853    script.AppendExtra("""
854set_stage("%(bcb_dev)s", "");
855""" % bcb_dev)
856    script.AppendExtra("else\n")
857
858    # Stage 1/3: Nothing to verify for full OTA. Write recovery image to /boot.
859    script.Comment("Stage 1/3")
860    _WriteRecoveryImageToBoot(script, output_zip)
861
862    script.AppendExtra("""
863set_stage("%(bcb_dev)s", "2/3");
864reboot_now("%(bcb_dev)s", "");
865endif;
866endif;
867""" % bcb_dev)
868
869  script.SetProgress(1)
870  script.AddToZip(input_zip, output_zip, input_path=OPTIONS.updater_binary)
871  metadata["ota-required-cache"] = str(script.required_cache)
872
  # We haven't written the metadata entry yet; that will be done in
  # FinalizeMetadata().
875  common.ZipClose(output_zip)
876
877  needed_property_files = (
878      NonAbOtaPropertyFiles(),
879  )
880  FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
881
882
883def WriteMetadata(metadata, output):
884  """Writes the metadata to the zip archive or a file.
885
886  Args:
887    metadata: The metadata dict for the package.
888    output: A ZipFile object or a string of the output file path.
889  """
890
891  value = "".join(["%s=%s\n" % kv for kv in sorted(metadata.items())])
892  if isinstance(output, zipfile.ZipFile):
893    common.ZipWriteStr(output, METADATA_NAME, value,
894                       compress_type=zipfile.ZIP_STORED)
895    return
896
897  with open(output, 'w') as f:
898    f.write(value)
899
900
901def HandleDowngradeMetadata(metadata, target_info, source_info):
902  # Only incremental OTAs are allowed to reach here.
903  assert OPTIONS.incremental_source is not None
904
905  post_timestamp = target_info.GetBuildProp("ro.build.date.utc")
906  pre_timestamp = source_info.GetBuildProp("ro.build.date.utc")
907  is_downgrade = int(post_timestamp) < int(pre_timestamp)
908
909  if OPTIONS.downgrade:
910    if not is_downgrade:
911      raise RuntimeError(
912          "--downgrade or --override_timestamp specified but no downgrade "
913          "detected: pre: %s, post: %s" % (pre_timestamp, post_timestamp))
914    metadata["ota-downgrade"] = "yes"
915  else:
916    if is_downgrade:
917      raise RuntimeError(
918          "Downgrade detected based on timestamp check: pre: %s, post: %s. "
919          "Need to specify --override_timestamp OR --downgrade to allow "
920          "building the incremental." % (pre_timestamp, post_timestamp))
921
922
923def GetPackageMetadata(target_info, source_info=None):
924  """Generates and returns the metadata dict.
925
926  It generates a dict() that contains the info to be written into an OTA
927  package (META-INF/com/android/metadata). It also handles the detection of
928  downgrade / data wipe based on the global options.
929
930  Args:
931    target_info: The BuildInfo instance that holds the target build info.
932    source_info: The BuildInfo instance that holds the source build info, or
933        None if generating full OTA.
934
935  Returns:
936    A dict to be written into package metadata entry.
937  """
938  assert isinstance(target_info, common.BuildInfo)
939  assert source_info is None or isinstance(source_info, common.BuildInfo)
940
941  separator = '|'
942
943  boot_variable_values = {}
944  if OPTIONS.boot_variable_file:
945    d = common.LoadDictionaryFromFile(OPTIONS.boot_variable_file)
946    for key, values in d.items():
947      boot_variable_values[key] = [val.strip() for val in values.split(',')]
948
949  post_build_devices, post_build_fingerprints = \
950      CalculateRuntimeDevicesAndFingerprints(target_info, boot_variable_values)
951  metadata = {
952      'post-build': separator.join(sorted(post_build_fingerprints)),
953      'post-build-incremental': target_info.GetBuildProp(
954          'ro.build.version.incremental'),
955      'post-sdk-level': target_info.GetBuildProp(
956          'ro.build.version.sdk'),
957      'post-security-patch-level': target_info.GetBuildProp(
958          'ro.build.version.security_patch'),
959  }
960
961  if target_info.is_ab and not OPTIONS.force_non_ab:
962    metadata['ota-type'] = 'AB'
963    metadata['ota-required-cache'] = '0'
964  else:
965    metadata['ota-type'] = 'BLOCK'
966
967  if OPTIONS.wipe_user_data:
968    metadata['ota-wipe'] = 'yes'
969
970  if OPTIONS.retrofit_dynamic_partitions:
971    metadata['ota-retrofit-dynamic-partitions'] = 'yes'
972
973  is_incremental = source_info is not None
974  if is_incremental:
975    pre_build_devices, pre_build_fingerprints = \
976        CalculateRuntimeDevicesAndFingerprints(source_info,
977                                               boot_variable_values)
978    metadata['pre-build'] = separator.join(sorted(pre_build_fingerprints))
979    metadata['pre-build-incremental'] = source_info.GetBuildProp(
980        'ro.build.version.incremental')
981    metadata['pre-device'] = separator.join(sorted(pre_build_devices))
982  else:
983    metadata['pre-device'] = separator.join(sorted(post_build_devices))
984
985  # Use the actual post-timestamp, even for a downgrade case.
986  metadata['post-timestamp'] = target_info.GetBuildProp('ro.build.date.utc')
987
988  # Detect downgrades and set up downgrade flags accordingly.
989  if is_incremental:
990    HandleDowngradeMetadata(metadata, target_info, source_info)
991
992  return metadata
993
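# Example (illustrative): for a full A/B OTA, GetPackageMetadata() returns
# entries along the lines of the following (all values are hypothetical).
#
#   {
#       'ota-type': 'AB',
#       'ota-required-cache': '0',
#       'post-build': 'brand/product/device:11/RP1A/123:user/release-keys',
#       'post-build-incremental': '123',
#       'post-sdk-level': '30',
#       'post-security-patch-level': '2020-01-01',
#       'post-timestamp': '1577836800',
#       'pre-device': 'device',
#   }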
994
995class PropertyFiles(object):
996  """A class that computes the property-files string for an OTA package.
997
998  A property-files string is a comma-separated string that contains the
999  offset/size info for an OTA package. The entries, which must be ZIP_STORED,
1000  can be fetched directly with the package URL along with the offset/size info.
  These strings can be used for streaming A/B OTAs, or to allow an updater to
  download the package metadata entry directly, without paying the cost of
  downloading the entire package.
1004
  Computing the final property-files string requires two passes, because signing
  the whole package (with signapk.jar) may reorder the ZIP entries, which in
  turn invalidates previously computed ZIP entry offset/size values.
1009
1010  This class provides functions to be called for each pass. The general flow is
1011  as follows.
1012
1013    property_files = PropertyFiles()
1014    # The first pass, which writes placeholders before doing initial signing.
1015    property_files.Compute()
1016    SignOutput()
1017
1018    # The second pass, by replacing the placeholders with actual data.
1019    property_files.Finalize()
1020    SignOutput()
1021
1022  And the caller can additionally verify the final result.
1023
1024    property_files.Verify()
1025  """
1026
1027  def __init__(self):
1028    self.name = None
1029    self.required = ()
1030    self.optional = ()
1031
1032  def Compute(self, input_zip):
1033    """Computes and returns a property-files string with placeholders.
1034
1035    We reserve extra space for the offset and size of the metadata entry itself,
1036    although we don't know the final values until the package gets signed.
1037
1038    Args:
1039      input_zip: The input ZIP file.
1040
1041    Returns:
1042      A string with placeholders for the metadata offset/size info, e.g.
1043      "payload.bin:679:343,payload_properties.txt:378:45,metadata:        ".
1044    """
1045    return self.GetPropertyFilesString(input_zip, reserve_space=True)
1046
1047  class InsufficientSpaceException(Exception):
1048    pass
1049
1050  def Finalize(self, input_zip, reserved_length):
1051    """Finalizes a property-files string with actual METADATA offset/size info.
1052
1053    The input ZIP file has been signed, with the ZIP entries in the desired
1054    place (signapk.jar will possibly reorder the ZIP entries). Now we compute
1055    the ZIP entry offsets and construct the property-files string with actual
1056    data. Note that during this process, we must pad the property-files string
1057    to the reserved length, so that the METADATA entry size remains the same.
1058    Otherwise the entries' offsets and sizes may change again.
1059
1060    Args:
1061      input_zip: The input ZIP file.
1062      reserved_length: The reserved length of the property-files string during
1063          the call to Compute(). The final string must be no more than this
1064          size.
1065
1066    Returns:
1067      A property-files string including the metadata offset/size info, e.g.
1068      "payload.bin:679:343,payload_properties.txt:378:45,metadata:69:379  ".
1069
1070    Raises:
1071      InsufficientSpaceException: If the reserved length is insufficient to hold
1072          the final string.
1073    """
1074    result = self.GetPropertyFilesString(input_zip, reserve_space=False)
1075    if len(result) > reserved_length:
1076      raise self.InsufficientSpaceException(
1077          'Insufficient reserved space: reserved={}, actual={}'.format(
1078              reserved_length, len(result)))
1079
1080    result += ' ' * (reserved_length - len(result))
1081    return result
1082
1083  def Verify(self, input_zip, expected):
1084    """Verifies the input ZIP file contains the expected property-files string.
1085
1086    Args:
1087      input_zip: The input ZIP file.
1088      expected: The property-files string that's computed from Finalize().
1089
1090    Raises:
1091      AssertionError: On finding a mismatch.
1092    """
1093    actual = self.GetPropertyFilesString(input_zip)
1094    assert actual == expected, \
1095        "Mismatching streaming metadata: {} vs {}.".format(actual, expected)
1096
1097  def GetPropertyFilesString(self, zip_file, reserve_space=False):
1098    """
1099    Constructs the property-files string per request.
1100
1101    Args:
1102      zip_file: The input ZIP file.
1103      reserved_length: The reserved length of the property-files string.
1104
1105    Returns:
1106      A property-files string including the metadata offset/size info, e.g.
1107      "payload.bin:679:343,payload_properties.txt:378:45,metadata:     ".
1108    """
1109
1110    def ComputeEntryOffsetSize(name):
1111      """Computes the zip entry offset and size."""
1112      info = zip_file.getinfo(name)
1113      offset = info.header_offset
1114      offset += zipfile.sizeFileHeader
1115      offset += len(info.extra) + len(info.filename)
1116      size = info.file_size
1117      return '%s:%d:%d' % (os.path.basename(name), offset, size)
1118
1119    tokens = []
1120    tokens.extend(self._GetPrecomputed(zip_file))
1121    for entry in self.required:
1122      tokens.append(ComputeEntryOffsetSize(entry))
1123    for entry in self.optional:
1124      if entry in zip_file.namelist():
1125        tokens.append(ComputeEntryOffsetSize(entry))
1126
    # 'META-INF/com/android/metadata' is required. We don't know its actual
    # offset and length (nor the values for other entries), so we reserve
    # 15 bytes as a placeholder ('offset:length'): up to 10 digits for the
    # offset (i.e. ~9 GiB), the ':' separator, and up to 4 digits for the
    # length. Note that all the reserved space serves the metadata entry only.
1133    if reserve_space:
1134      tokens.append('metadata:' + ' ' * 15)
1135    else:
1136      tokens.append(ComputeEntryOffsetSize(METADATA_NAME))
1137
1138    return ','.join(tokens)
1139
1140  def _GetPrecomputed(self, input_zip):
1141    """Computes the additional tokens to be included into the property-files.
1142
1143    This applies to tokens without actual ZIP entries, such as
    payload_metadata.bin. We want to expose the offset/size to updaters, so
1145    that they can download the payload metadata directly with the info.
1146
1147    Args:
1148      input_zip: The input zip file.
1149
1150    Returns:
1151      A list of strings (tokens) to be added to the property-files string.
1152    """
1153    # pylint: disable=no-self-use
1154    # pylint: disable=unused-argument
1155    return []
1156
1157
1158class StreamingPropertyFiles(PropertyFiles):
1159  """A subclass for computing the property-files for streaming A/B OTAs."""
1160
1161  def __init__(self):
1162    super(StreamingPropertyFiles, self).__init__()
1163    self.name = 'ota-streaming-property-files'
1164    self.required = (
1165        # payload.bin and payload_properties.txt must exist.
1166        'payload.bin',
1167        'payload_properties.txt',
1168    )
1169    self.optional = (
1170        # care_map is available only if dm-verity is enabled.
1171        'care_map.pb',
1172        'care_map.txt',
1173        # compatibility.zip is available only if target supports Treble.
1174        'compatibility.zip',
1175    )
1176
1177
1178class AbOtaPropertyFiles(StreamingPropertyFiles):
1179  """The property-files for A/B OTA that includes payload_metadata.bin info.
1180
1181  Since P, we expose one more token (aka property-file), in addition to the ones
1182  for streaming A/B OTA, for a virtual entry of 'payload_metadata.bin'.
1183  'payload_metadata.bin' is the header part of a payload ('payload.bin'), which
1184  doesn't exist as a separate ZIP entry, but can be used to verify if the
1185  payload can be applied on the given device.
1186
  For backward compatibility, we keep both 'ota-streaming-property-files' and
  the newly added 'ota-property-files' in P. The new token will only be
1189  available in 'ota-property-files'.
1190  """
1191
1192  def __init__(self):
1193    super(AbOtaPropertyFiles, self).__init__()
1194    self.name = 'ota-property-files'
1195
1196  def _GetPrecomputed(self, input_zip):
1197    offset, size = self._GetPayloadMetadataOffsetAndSize(input_zip)
1198    return ['payload_metadata.bin:{}:{}'.format(offset, size)]
1199
1200  @staticmethod
1201  def _GetPayloadMetadataOffsetAndSize(input_zip):
1202    """Computes the offset and size of the payload metadata for a given package.
1203
1204    (From system/update_engine/update_metadata.proto)
1205    A delta update file contains all the deltas needed to update a system from
1206    one specific version to another specific version. The update format is
1207    represented by this struct pseudocode:
1208
1209    struct delta_update_file {
1210      char magic[4] = "CrAU";
1211      uint64 file_format_version;
1212      uint64 manifest_size;  // Size of protobuf DeltaArchiveManifest
1213
1214      // Only present if format_version > 1:
1215      uint32 metadata_signature_size;
1216
1217      // The Bzip2 compressed DeltaArchiveManifest
      char manifest[manifest_size];
1219
1220      // The signature of the metadata (from the beginning of the payload up to
1221      // this location, not including the signature itself). This is a
1222      // serialized Signatures message.
      char metadata_signature_message[metadata_signature_size];
1224
1225      // Data blobs for files, no specific format. The specific offset
1226      // and length of each data blob is recorded in the DeltaArchiveManifest.
1227      struct {
1228        char data[];
1229      } blobs[];
1230
1231      // These two are not signed:
1232      uint64 payload_signatures_message_size;
1233      char payload_signatures_message[];
1234    };
1235
    'payload_metadata.bin' contains all the bytes from the beginning of the
    payload, up to the end of 'metadata_signature_message'.
1238    """
1239    payload_info = input_zip.getinfo('payload.bin')
1240    payload_offset = payload_info.header_offset
1241    payload_offset += zipfile.sizeFileHeader
1242    payload_offset += len(payload_info.extra) + len(payload_info.filename)
1243    payload_size = payload_info.file_size
1244
1245    with input_zip.open('payload.bin') as payload_fp:
1246      header_bin = payload_fp.read(24)
1247
1248    # network byte order (big-endian)
1249    header = struct.unpack("!IQQL", header_bin)
1250
1251    # 'CrAU'
1252    magic = header[0]
1253    assert magic == 0x43724155, "Invalid magic: {:x}".format(magic)
1254
1255    manifest_size = header[2]
1256    metadata_signature_size = header[3]
1257    metadata_total = 24 + manifest_size + metadata_signature_size
1258    assert metadata_total < payload_size
1259
1260    return (payload_offset, metadata_total)
1261
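# Worked example for AbOtaPropertyFiles._GetPayloadMetadataOffsetAndSize()
# above (numbers are illustrative): with manifest_size = 1000 and
# metadata_signature_size = 256 read from the 24-byte header, the payload
# metadata spans the first 24 + 1000 + 256 = 1280 bytes of payload.bin,
# starting at the returned offset of the payload.bin entry within the package.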
1262
1263class NonAbOtaPropertyFiles(PropertyFiles):
1264  """The property-files for non-A/B OTA.
1265
  For non-A/B OTAs, the property-files string contains the info for the METADATA
  entry, with which a system updater can fetch the package metadata prior to
  downloading the entire package.
1269  """
1270
1271  def __init__(self):
1272    super(NonAbOtaPropertyFiles, self).__init__()
1273    self.name = 'ota-property-files'
1274
1275
1276def FinalizeMetadata(metadata, input_file, output_file, needed_property_files):
1277  """Finalizes the metadata and signs an A/B OTA package.
1278
1279  In order to stream an A/B OTA package, we need 'ota-streaming-property-files'
1280  that contains the offsets and sizes for the ZIP entries. An example
1281  property-files string is as follows.
1282
1283    "payload.bin:679:343,payload_properties.txt:378:45,metadata:69:379"
1284
  The OTA server can pass down this string, in addition to the package URL, to
  the system update client, which can then fetch individual ZIP entries
  (ZIP_STORED) directly at the given offsets of the URL.
1288
1289  Args:
1290    metadata: The metadata dict for the package.
1291    input_file: The input ZIP filename that doesn't contain the package METADATA
1292        entry yet.
1293    output_file: The final output ZIP filename.
    needed_property_files: The list of PropertyFiles to be generated.
1295  """
1296
1297  def ComputeAllPropertyFiles(input_file, needed_property_files):
1298    # Write the current metadata entry with placeholders.
1299    with zipfile.ZipFile(input_file) as input_zip:
1300      for property_files in needed_property_files:
1301        metadata[property_files.name] = property_files.Compute(input_zip)
1302      namelist = input_zip.namelist()
1303
1304    if METADATA_NAME in namelist:
1305      common.ZipDelete(input_file, METADATA_NAME)
1306    output_zip = zipfile.ZipFile(input_file, 'a')
1307    WriteMetadata(metadata, output_zip)
1308    common.ZipClose(output_zip)
1309
1310    if OPTIONS.no_signing:
1311      return input_file
1312
1313    prelim_signing = common.MakeTempFile(suffix='.zip')
1314    SignOutput(input_file, prelim_signing)
1315    return prelim_signing
1316
1317  def FinalizeAllPropertyFiles(prelim_signing, needed_property_files):
1318    with zipfile.ZipFile(prelim_signing) as prelim_signing_zip:
1319      for property_files in needed_property_files:
1320        metadata[property_files.name] = property_files.Finalize(
1321            prelim_signing_zip, len(metadata[property_files.name]))
1322
  # SignOutput(), which in turn calls signapk.jar, will possibly reorder the ZIP
  # entries, as well as pad the entry headers. We do a preliminary signing
1325  # (with an incomplete metadata entry) to allow that to happen. Then compute
1326  # the ZIP entry offsets, write back the final metadata and do the final
1327  # signing.
1328  prelim_signing = ComputeAllPropertyFiles(input_file, needed_property_files)
1329  try:
1330    FinalizeAllPropertyFiles(prelim_signing, needed_property_files)
1331  except PropertyFiles.InsufficientSpaceException:
1332    # Even with the preliminary signing, the entry orders may change
1333    # dramatically, which leads to insufficiently reserved space during the
    # first call to ComputeAllPropertyFiles(). In that case, we redo the
    # preliminary signing, based on the already-ordered ZIP entries, to
1336    # address the issue.
1337    prelim_signing = ComputeAllPropertyFiles(
1338        prelim_signing, needed_property_files)
1339    FinalizeAllPropertyFiles(prelim_signing, needed_property_files)
1340
1341  # Replace the METADATA entry.
1342  common.ZipDelete(prelim_signing, METADATA_NAME)
1343  output_zip = zipfile.ZipFile(prelim_signing, 'a')
1344  WriteMetadata(metadata, output_zip)
1345  common.ZipClose(output_zip)
1346
1347  # Re-sign the package after updating the metadata entry.
1348  if OPTIONS.no_signing:
1349    output_file = prelim_signing
1350  else:
1351    SignOutput(prelim_signing, output_file)
1352
1353  # Reopen the final signed zip to double check the streaming metadata.
1354  with zipfile.ZipFile(output_file) as output_zip:
1355    for property_files in needed_property_files:
1356      property_files.Verify(output_zip, metadata[property_files.name].strip())
1357
1358  # If requested, dump the metadata to a separate file.
1359  output_metadata_path = OPTIONS.output_metadata_path
1360  if output_metadata_path:
1361    WriteMetadata(metadata, output_metadata_path)
1362
1363
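# The helper below is an illustrative sketch only and is never called by this
# script. It shows how a system updater might parse the property-files string
# documented in FinalizeMetadata() above, assuming the "name:offset:size" field
# layout shown in that example.
def _example_parse_property_files(property_files_str):
  """Parses 'name:offset:size,...' into a {name: (offset, size)} dict."""
  entries = {}
  for token in property_files_str.strip().split(','):
    name, offset, size = token.split(':')
    entries[name] = (int(offset), int(size))
  return entries

# Example (illustrative offsets/sizes):
#   _example_parse_property_files(
#       "payload.bin:679:343,metadata:69:379")["metadata"] == (69, 379)

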
1364def WriteBlockIncrementalOTAPackage(target_zip, source_zip, output_file):
1365  target_info = common.BuildInfo(OPTIONS.target_info_dict, OPTIONS.oem_dicts)
1366  source_info = common.BuildInfo(OPTIONS.source_info_dict, OPTIONS.oem_dicts)
1367
1368  target_api_version = target_info["recovery_api_version"]
1369  source_api_version = source_info["recovery_api_version"]
1370  if source_api_version == 0:
1371    logger.warning(
1372        "Generating edify script for a source that can't install it.")
1373
1374  script = edify_generator.EdifyGenerator(
1375      source_api_version, target_info, fstab=source_info["fstab"])
1376
1377  if target_info.oem_props or source_info.oem_props:
1378    if not OPTIONS.oem_no_mount:
1379      source_info.WriteMountOemScript(script)
1380
1381  metadata = GetPackageMetadata(target_info, source_info)
1382
1383  if not OPTIONS.no_signing:
1384    staging_file = common.MakeTempFile(suffix='.zip')
1385  else:
1386    staging_file = output_file
1387
1388  output_zip = zipfile.ZipFile(
1389      staging_file, "w", compression=zipfile.ZIP_DEFLATED)
1390
1391  device_specific = common.DeviceSpecificParams(
1392      source_zip=source_zip,
1393      source_version=source_api_version,
1394      source_tmp=OPTIONS.source_tmp,
1395      target_zip=target_zip,
1396      target_version=target_api_version,
1397      target_tmp=OPTIONS.target_tmp,
1398      output_zip=output_zip,
1399      script=script,
1400      metadata=metadata,
1401      info_dict=source_info)
1402
1403  source_boot = common.GetBootableImage(
1404      "/tmp/boot.img", "boot.img", OPTIONS.source_tmp, "BOOT", source_info)
1405  target_boot = common.GetBootableImage(
1406      "/tmp/boot.img", "boot.img", OPTIONS.target_tmp, "BOOT", target_info)
1407  updating_boot = (not OPTIONS.two_step and
1408                   (source_boot.data != target_boot.data))
1409
1410  target_recovery = common.GetBootableImage(
1411      "/tmp/recovery.img", "recovery.img", OPTIONS.target_tmp, "RECOVERY")
1412
1413  block_diff_dict = GetBlockDifferences(target_zip=target_zip,
1414                                        source_zip=source_zip,
1415                                        target_info=target_info,
1416                                        source_info=source_info,
1417                                        device_specific=device_specific)
1418
1419  CheckVintfIfTrebleEnabled(OPTIONS.target_tmp, target_info)
1420
1421  # Assertions (e.g. device properties check).
1422  target_info.WriteDeviceAssertions(script, OPTIONS.oem_no_mount)
1423  device_specific.IncrementalOTA_Assertions()
1424
1425  # Two-step incremental package strategy (in chronological order,
1426  # which is *not* the order in which the generated script has
1427  # things):
1428  #
1429  # if stage is not "2/3" or "3/3":
1430  #    do verification on current system
1431  #    write recovery image to boot partition
1432  #    set stage to "2/3"
1433  #    reboot to boot partition and restart recovery
1434  # else if stage is "2/3":
1435  #    write recovery image to recovery partition
1436  #    set stage to "3/3"
1437  #    reboot to recovery partition and restart recovery
1438  # else:
1439  #    (stage must be "3/3")
1440  #    perform update:
1441  #       patch system files, etc.
1442  #       force full install of new boot image
1443  #       set up system to update recovery partition on first boot
1444  #    complete script normally
1445  #    (allow recovery to mark itself finished and reboot)
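  #
  # When OPTIONS.two_step is set, the emitted edify script has roughly the
  # following shape (a simplified sketch assembled from the AppendExtra()
  # fragments below; details elided):
  #
  #   if get_stage("<bcb_dev>") == "2/3" then
  #     <write recovery image to /recovery>;
  #     set_stage("<bcb_dev>", "3/3");
  #     reboot_now("<bcb_dev>", "recovery");
  #   else if get_stage("<bcb_dev>") != "3/3" then
  #     <verify the current system>;
  #     <write recovery image to /boot>;
  #     set_stage("<bcb_dev>", "2/3");
  #     reboot_now("<bcb_dev>", "");
  #   else
  #     <apply the update>;
  #     set_stage("<bcb_dev>", "");
  #   endif;
  #   endif;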
1446
1447  if OPTIONS.two_step:
1448    if not source_info.get("multistage_support"):
1449      assert False, "two-step packages not supported by this build"
1450    fs = source_info["fstab"]["/misc"]
1451    assert fs.fs_type.upper() == "EMMC", \
1452        "two-step packages only supported on devices with EMMC /misc partitions"
1453    bcb_dev = {"bcb_dev" : fs.device}
1454    common.ZipWriteStr(output_zip, "recovery.img", target_recovery.data)
1455    script.AppendExtra("""
1456if get_stage("%(bcb_dev)s") == "2/3" then
1457""" % bcb_dev)
1458
1459    # Stage 2/3: Write recovery image to /recovery (currently running /boot).
1460    script.Comment("Stage 2/3")
1461    script.AppendExtra("sleep(20);\n")
1462    script.WriteRawImage("/recovery", "recovery.img")
1463    script.AppendExtra("""
1464set_stage("%(bcb_dev)s", "3/3");
1465reboot_now("%(bcb_dev)s", "recovery");
1466else if get_stage("%(bcb_dev)s") != "3/3" then
1467""" % bcb_dev)
1468
1469    # Stage 1/3: (a) Verify the current system.
1470    script.Comment("Stage 1/3")
1471
1472  # Dump fingerprints
1473  script.Print("Source: {}".format(source_info.fingerprint))
1474  script.Print("Target: {}".format(target_info.fingerprint))
1475
1476  script.Print("Verifying current system...")
1477
1478  device_specific.IncrementalOTA_VerifyBegin()
1479
1480  WriteFingerprintAssertion(script, target_info, source_info)
1481
1482  # Check the required cache size (i.e. stashed blocks).
1483  required_cache_sizes = [diff.required_cache for diff in
1484                          block_diff_dict.values()]
1485  if updating_boot:
1486    boot_type, boot_device_expr = common.GetTypeAndDeviceExpr("/boot",
1487                                                              source_info)
1488    d = common.Difference(target_boot, source_boot)
1489    _, _, d = d.ComputePatch()
1490    if d is None:
1491      include_full_boot = True
1492      common.ZipWriteStr(output_zip, "boot.img", target_boot.data)
1493    else:
1494      include_full_boot = False
1495
1496      logger.info(
1497          "boot      target: %d  source: %d  diff: %d", target_boot.size,
1498          source_boot.size, len(d))
1499
1500      common.ZipWriteStr(output_zip, "boot.img.p", d)
1501
1502      target_expr = 'concat("{}:",{},":{}:{}")'.format(
1503          boot_type, boot_device_expr, target_boot.size, target_boot.sha1)
1504      source_expr = 'concat("{}:",{},":{}:{}")'.format(
1505          boot_type, boot_device_expr, source_boot.size, source_boot.sha1)
1506      script.PatchPartitionExprCheck(target_expr, source_expr)
1507
1508      required_cache_sizes.append(target_boot.size)
1509
1510  if required_cache_sizes:
1511    script.CacheFreeSpaceCheck(max(required_cache_sizes))
1512
1513  # Verify the existing partitions.
1514  for diff in block_diff_dict.values():
1515    diff.WriteVerifyScript(script, touched_blocks_only=True)
1516
1517  device_specific.IncrementalOTA_VerifyEnd()
1518
1519  if OPTIONS.two_step:
1520    # Stage 1/3: (b) Write recovery image to /boot.
1521    _WriteRecoveryImageToBoot(script, output_zip)
1522
1523    script.AppendExtra("""
1524set_stage("%(bcb_dev)s", "2/3");
1525reboot_now("%(bcb_dev)s", "");
1526else
1527""" % bcb_dev)
1528
1529    # Stage 3/3: Make changes.
1530    script.Comment("Stage 3/3")
1531
1532  script.Comment("---- start making changes here ----")
1533
1534  device_specific.IncrementalOTA_InstallBegin()
1535
1536  progress_dict = {partition: 0.1 for partition in block_diff_dict}
1537  progress_dict["system"] = 1 - len(block_diff_dict) * 0.1
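  # For example (illustrative), with block diffs for system, vendor and product,
  # vendor and product each keep a 0.1 share of the progress bar while system
  # gets 1 - 3 * 0.1 = 0.7.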
1538
1539  if OPTIONS.source_info_dict.get("use_dynamic_partitions") == "true":
1540    if OPTIONS.target_info_dict.get("use_dynamic_partitions") != "true":
1541      raise RuntimeError(
1542          "can't generate incremental that disables dynamic partitions")
1543    dynamic_partitions_diff = common.DynamicPartitionsDifference(
1544        info_dict=OPTIONS.target_info_dict,
1545        source_info_dict=OPTIONS.source_info_dict,
1546        block_diffs=block_diff_dict.values(),
1547        progress_dict=progress_dict)
1548    dynamic_partitions_diff.WriteScript(
1549        script, output_zip, write_verify_script=OPTIONS.verify)
1550  else:
1551    for block_diff in block_diff_dict.values():
1552      block_diff.WriteScript(script, output_zip,
1553                             progress=progress_dict.get(block_diff.partition),
1554                             write_verify_script=OPTIONS.verify)
1555
1556  if OPTIONS.two_step:
1557    common.ZipWriteStr(output_zip, "boot.img", target_boot.data)
1558    script.WriteRawImage("/boot", "boot.img")
1559    logger.info("writing full boot image (forced by two-step mode)")
1560
1561  if not OPTIONS.two_step:
1562    if updating_boot:
1563      if include_full_boot:
1564        logger.info("boot image changed; including full.")
1565        script.Print("Installing boot image...")
1566        script.WriteRawImage("/boot", "boot.img")
1567      else:
1568        # Produce the boot image by applying a patch to the current
1569        # contents of the boot partition, and write it back to the
1570        # partition.
1571        logger.info("boot image changed; including patch.")
1572        script.Print("Patching boot image...")
1573        script.ShowProgress(0.1, 10)
1574        target_expr = 'concat("{}:",{},":{}:{}")'.format(
1575            boot_type, boot_device_expr, target_boot.size, target_boot.sha1)
1576        source_expr = 'concat("{}:",{},":{}:{}")'.format(
1577            boot_type, boot_device_expr, source_boot.size, source_boot.sha1)
1578        script.PatchPartitionExpr(target_expr, source_expr, '"boot.img.p"')
1579    else:
1580      logger.info("boot image unchanged; skipping.")
1581
1582  # Do device-specific installation (eg, write radio image).
1583  device_specific.IncrementalOTA_InstallEnd()
1584
1585  if OPTIONS.extra_script is not None:
1586    script.AppendExtra(OPTIONS.extra_script)
1587
1588  if OPTIONS.wipe_user_data:
1589    script.Print("Erasing user data...")
1590    script.FormatPartition("/data")
1591
1592  if OPTIONS.two_step:
1593    script.AppendExtra("""
1594set_stage("%(bcb_dev)s", "");
1595endif;
1596endif;
1597""" % bcb_dev)
1598
1599  script.SetProgress(1)
1600  # For downgrade OTAs, we prefer the update-binary from the source build,
1601  # which is actually newer than the one in the target build.
1602  if OPTIONS.downgrade:
1603    script.AddToZip(source_zip, output_zip, input_path=OPTIONS.updater_binary)
1604  else:
1605    script.AddToZip(target_zip, output_zip, input_path=OPTIONS.updater_binary)
1606  metadata["ota-required-cache"] = str(script.required_cache)
1607
1608  # We haven't written the metadata entry yet, which will be handled in
1609  # FinalizeMetadata().
1610  common.ZipClose(output_zip)
1611
1612  # Sign the generated zip package unless no_signing is specified.
1613  needed_property_files = (
1614      NonAbOtaPropertyFiles(),
1615  )
1616  FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
1617
1618
1619def GetTargetFilesZipForSecondaryImages(input_file, skip_postinstall=False):
1620  """Returns a target-files.zip file for generating secondary payload.
1621
1622  Although the original target-files.zip already contains secondary slot
1623  images (i.e. IMAGES/system_other.img), we need to rename the files to the
1624  ones without the _other suffix. Note that we cannot instead modify the names
1625  in META/ab_partitions.txt, because there are no matching partitions on device.
1626
1627  For the partitions that don't have secondary images, the ones for the primary
1628  slot will be used. This ensures that we always have valid boot, vbmeta, and
1629  bootloader images in the inactive slot.
1630
1631  Args:
1632    input_file: The input target-files.zip file.
1633    skip_postinstall: Whether to skip copying the postinstall config file.
1634
1635  Returns:
1636    The filename of the target-files.zip for generating secondary payload.
1637  """
1638
1639  def GetInfoForSecondaryImages(info_file):
1640    """Updates info file for secondary payload generation.
1641
1642    Scan each line in the info file, and remove the unwanted partitions from
1643    the dynamic partition list in the related properties. e.g.
1644    "super_google_dynamic_partitions_partition_list=system vendor product"
1645    will become "super_google_dynamic_partitions_partition_list=system".
1646
1647    Args:
1648      info_file: The input info file. e.g. misc_info.txt.
1649
1650    Returns:
1651      A string of the updated info content.
1652    """
1653
1654    output_list = []
1655    with open(info_file) as f:
1656      lines = f.read().splitlines()
1657
1658    # The suffix in partition_list variables that follows the name of the
1659    # partition group.
1660    LIST_SUFFIX = 'partition_list'
1661    for line in lines:
1662      if line.startswith('#') or '=' not in line:
1663        output_list.append(line)
1664        continue
1665      key, value = line.strip().split('=', 1)
1666      if key == 'dynamic_partition_list' or key.endswith(LIST_SUFFIX):
1667        partitions = value.split()
1668        partitions = [partition for partition in partitions if partition
1669                      not in SECONDARY_PAYLOAD_SKIPPED_IMAGES]
1670        output_list.append('{}={}'.format(key, ' '.join(partitions)))
1671      elif key == 'virtual_ab' or key == "virtual_ab_retrofit":
1672        # Remove the virtual_ab flag from the secondary payload so that the
1673        # OTA client doesn't use snapshots for the secondary update.
1674        pass
1675      else:
1676        output_list.append(line)
1677    return '\n'.join(output_list)
1678
1679  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
1680  target_zip = zipfile.ZipFile(target_file, 'w', allowZip64=True)
1681
1682  with zipfile.ZipFile(input_file, 'r') as input_zip:
1683    infolist = input_zip.infolist()
1684
1685  input_tmp = common.UnzipTemp(input_file, UNZIP_PATTERN)
1686  for info in infolist:
1687    unzipped_file = os.path.join(input_tmp, *info.filename.split('/'))
1688    if info.filename == 'IMAGES/system_other.img':
1689      common.ZipWrite(target_zip, unzipped_file, arcname='IMAGES/system.img')
1690
1691    # Primary images and friends need to be skipped explicitly.
1692    elif info.filename in ('IMAGES/system.img',
1693                           'IMAGES/system.map'):
1694      pass
1695
1696    # Copy images that are not in SECONDARY_PAYLOAD_SKIPPED_IMAGES.
1697    elif info.filename.startswith(('IMAGES/', 'RADIO/')):
1698      image_name = os.path.basename(info.filename)
1699      if image_name not in ['{}.img'.format(partition) for partition in
1700                            SECONDARY_PAYLOAD_SKIPPED_IMAGES]:
1701        common.ZipWrite(target_zip, unzipped_file, arcname=info.filename)
1702
1703    # Skip copying the postinstall config if requested.
1704    elif skip_postinstall and info.filename == POSTINSTALL_CONFIG:
1705      pass
1706
1707    elif info.filename.startswith('META/'):
1708      # Remove the unnecessary partitions for secondary images from the
1709      # ab_partitions file.
1710      if info.filename == AB_PARTITIONS:
1711        with open(unzipped_file) as f:
1712          partition_list = f.read().splitlines()
1713        partition_list = [partition for partition in partition_list if partition
1714                          and partition not in SECONDARY_PAYLOAD_SKIPPED_IMAGES]
1715        common.ZipWriteStr(target_zip, info.filename, '\n'.join(partition_list))
1716      # Remove the unnecessary partitions from the dynamic partitions list.
1717      elif (info.filename == 'META/misc_info.txt' or
1718            info.filename == DYNAMIC_PARTITION_INFO):
1719        modified_info = GetInfoForSecondaryImages(unzipped_file)
1720        common.ZipWriteStr(target_zip, info.filename, modified_info)
1721      else:
1722        common.ZipWrite(target_zip, unzipped_file, arcname=info.filename)
1723
1724  common.ZipClose(target_zip)
1725
1726  return target_file
1727
1728
1729def GetTargetFilesZipWithoutPostinstallConfig(input_file):
1730  """Returns a target-files.zip that's not containing postinstall_config.txt.
1731
1732  This allows brillo_update_payload script to skip writing all the postinstall
1733  hooks in the generated payload. The input target-files.zip file will be
1734  duplicated, with 'META/postinstall_config.txt' skipped. If input_file doesn't
1735  contain the postinstall_config.txt entry, the input file will be returned.
1736
1737  Args:
1738    input_file: The input target-files.zip filename.
1739
1740  Returns:
1741    The filename of target-files.zip that doesn't contain postinstall config.
1742  """
1743  # We should only make a copy if the postinstall_config entry exists.
1744  with zipfile.ZipFile(input_file, 'r') as input_zip:
1745    if POSTINSTALL_CONFIG not in input_zip.namelist():
1746      return input_file
1747
1748  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
1749  shutil.copyfile(input_file, target_file)
1750  common.ZipDelete(target_file, POSTINSTALL_CONFIG)
1751  return target_file
1752
1753
1754def GetTargetFilesZipForRetrofitDynamicPartitions(input_file,
1755                                                  super_block_devices,
1756                                                  dynamic_partition_list):
1757  """Returns a target-files.zip for retrofitting dynamic partitions.
1758
1759  This allows brillo_update_payload to generate an OTA based on the exact
1760  bits on the block devices. Postinstall is disabled.
1761
1762  Args:
1763    input_file: The input target-files.zip filename.
1764    super_block_devices: The list of super block devices
1765    dynamic_partition_list: The list of dynamic partitions
1766
1767  Returns:
1768    The filename of target-files.zip with *.img replaced with super_*.img for
1769    each block device in super_block_devices.
1770  """
1771  assert super_block_devices, "No super_block_devices are specified."
1772
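  # Illustrative mapping (assuming super_block_devices = ['system', 'vendor']):
  #   'OTA/super_system.img' -> 'IMAGES/system.img'
  #   'OTA/super_vendor.img' -> 'IMAGES/vendor.img'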
1773  replace = {'OTA/super_{}.img'.format(dev): 'IMAGES/{}.img'.format(dev)
1774             for dev in super_block_devices}
1775
1776  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
1777  shutil.copyfile(input_file, target_file)
1778
1779  with zipfile.ZipFile(input_file) as input_zip:
1780    namelist = input_zip.namelist()
1781
1782  input_tmp = common.UnzipTemp(input_file, RETROFIT_DAP_UNZIP_PATTERN)
1783
1784  # Remove partitions from META/ab_partitions.txt that are in
1785  # dynamic_partition_list but not in super_block_devices, so that
1786  # brillo_update_payload won't generate updates for those logical partitions.
1787  ab_partitions_file = os.path.join(input_tmp, *AB_PARTITIONS.split('/'))
1788  with open(ab_partitions_file) as f:
1789    ab_partitions_lines = f.readlines()
1790    ab_partitions = [line.strip() for line in ab_partitions_lines]
1791  # Assert that all super_block_devices are in ab_partitions
1792  super_device_not_updated = [partition for partition in super_block_devices
1793                              if partition not in ab_partitions]
1794  assert not super_device_not_updated, \
1795      "{} is in super_block_devices but not in {}".format(
1796          super_device_not_updated, AB_PARTITIONS)
1797  # ab_partitions -= (dynamic_partition_list - super_block_devices)
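  # For example (illustrative), with dynamic_partition_list = ['system',
  # 'vendor', 'product'] and super_block_devices = ['system', 'vendor'], any
  # 'product' entry is dropped from ab_partitions.txt.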
1798  new_ab_partitions = common.MakeTempFile(prefix="ab_partitions", suffix=".txt")
1799  with open(new_ab_partitions, 'w') as f:
1800    for partition in ab_partitions:
1801      if (partition in dynamic_partition_list and
1802          partition not in super_block_devices):
1803        logger.info("Dropping %s from ab_partitions.txt", partition)
1804        continue
1805      f.write(partition + "\n")
1806  to_delete = [AB_PARTITIONS]
1807
1808  # Always skip postinstall for a retrofit update.
1809  to_delete += [POSTINSTALL_CONFIG]
1810
1811  # Delete dynamic_partitions_info.txt so that brillo_update_payload thinks this
1812  # is a regular update on devices without dynamic partitions support.
1813  to_delete += [DYNAMIC_PARTITION_INFO]
1814
1815  # Remove the existing partition images as well as the map files.
1816  to_delete += list(replace.values())
1817  to_delete += ['IMAGES/{}.map'.format(dev) for dev in super_block_devices]
1818
1819  common.ZipDelete(target_file, to_delete)
1820
1821  target_zip = zipfile.ZipFile(target_file, 'a', allowZip64=True)
1822
1823  # Write super_{foo}.img as {foo}.img.
1824  for src, dst in replace.items():
1825    assert src in namelist, \
1826        'Missing {} in {}; {} cannot be written'.format(src, input_file, dst)
1827    unzipped_file = os.path.join(input_tmp, *src.split('/'))
1828    common.ZipWrite(target_zip, unzipped_file, arcname=dst)
1829
1830  # Write new ab_partitions.txt file
1831  common.ZipWrite(target_zip, new_ab_partitions, arcname=AB_PARTITIONS)
1832
1833  common.ZipClose(target_zip)
1834
1835  return target_file
1836
1837
1838def GenerateAbOtaPackage(target_file, output_file, source_file=None):
1839  """Generates an Android OTA package that has A/B update payload."""
1840  # Stage the output zip package for package signing.
1841  if not OPTIONS.no_signing:
1842    staging_file = common.MakeTempFile(suffix='.zip')
1843  else:
1844    staging_file = output_file
1845  output_zip = zipfile.ZipFile(staging_file, "w",
1846                               compression=zipfile.ZIP_DEFLATED)
1847
1848  if source_file is not None:
1849    target_info = common.BuildInfo(OPTIONS.target_info_dict, OPTIONS.oem_dicts)
1850    source_info = common.BuildInfo(OPTIONS.source_info_dict, OPTIONS.oem_dicts)
1851  else:
1852    target_info = common.BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
1853    source_info = None
1854
1855  # Metadata to comply with Android OTA package format.
1856  metadata = GetPackageMetadata(target_info, source_info)
1857
1858  if OPTIONS.retrofit_dynamic_partitions:
1859    target_file = GetTargetFilesZipForRetrofitDynamicPartitions(
1860        target_file, target_info.get("super_block_devices").strip().split(),
1861        target_info.get("dynamic_partition_list").strip().split())
1862  elif OPTIONS.skip_postinstall:
1863    target_file = GetTargetFilesZipWithoutPostinstallConfig(target_file)
1864
1865  # Generate payload.
1866  payload = Payload()
1867
1868  # Enforce a max timestamp this payload can be applied on top of.
1869  if OPTIONS.downgrade:
1870    max_timestamp = source_info.GetBuildProp("ro.build.date.utc")
1871  else:
1872    max_timestamp = metadata["post-timestamp"]
1873  additional_args = ["--max_timestamp", max_timestamp]
1874
1875  payload.Generate(target_file, source_file, additional_args)
1876
1877  # Sign the payload.
1878  payload_signer = PayloadSigner()
1879  payload.Sign(payload_signer)
1880
1881  # Write the payload into output zip.
1882  payload.WriteToZip(output_zip)
1883
1884  # Generate and include the secondary payload that installs secondary images
1885  # (e.g. system_other.img).
1886  if OPTIONS.include_secondary:
1887    # We always include a full payload for the secondary slot, even when
1888    # building an incremental OTA. See the comments for "--include_secondary".
1889    secondary_target_file = GetTargetFilesZipForSecondaryImages(
1890        target_file, OPTIONS.skip_postinstall)
1891    secondary_payload = Payload(secondary=True)
1892    secondary_payload.Generate(secondary_target_file,
1893                               additional_args=additional_args)
1894    secondary_payload.Sign(payload_signer)
1895    secondary_payload.WriteToZip(output_zip)
1896
1897  # If dm-verity is supported for the device, copy the contents of the care_map
1898  # into the A/B OTA package.
1899  target_zip = zipfile.ZipFile(target_file, "r")
1900  if (target_info.get("verity") == "true" or
1901      target_info.get("avb_enable") == "true"):
1902    care_map_list = [x for x in ["care_map.pb", "care_map.txt"] if
1903                     "META/" + x in target_zip.namelist()]
1904
1905    # Adds care_map if either the protobuf format or the plain text one exists.
1906    if care_map_list:
1907      care_map_name = care_map_list[0]
1908      care_map_data = target_zip.read("META/" + care_map_name)
1909      # In order to support streaming, care_map needs to be packed as
1910      # ZIP_STORED.
1911      common.ZipWriteStr(output_zip, care_map_name, care_map_data,
1912                         compress_type=zipfile.ZIP_STORED)
1913    else:
1914      logger.warning("Cannot find care map file in target_file package")
1915
1916  common.ZipClose(target_zip)
1917
1918  CheckVintfIfTrebleEnabled(target_file, target_info)
1919
1920  # We haven't written the metadata entry yet, which will be handled in
1921  # FinalizeMetadata().
1922  common.ZipClose(output_zip)
1923
1924  # AbOtaPropertyFiles intends to replace StreamingPropertyFiles, as it covers
1925  # all the info of the latter. However, system updaters and OTA servers need
1926  # time to switch to the new flag, so we keep both flags for the P timeframe
1927  # and will remove StreamingPropertyFiles in a later release.
1928  needed_property_files = (
1929      AbOtaPropertyFiles(),
1930      StreamingPropertyFiles(),
1931  )
1932  FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
1933
1934
1935def GenerateNonAbOtaPackage(target_file, output_file, source_file=None):
1936  """Generates a non-A/B OTA package."""
1937  # Sanity check the loaded info dicts first.
1938  if OPTIONS.info_dict.get("no_recovery") == "true":
1939    raise common.ExternalError(
1940        "--- target build has specified no recovery ---")
1941
1942  # Non-A/B OTAs rely on the /cache partition to store temporary files.
1943  cache_size = OPTIONS.info_dict.get("cache_size")
1944  if cache_size is None:
1945    logger.warning("--- can't determine the cache partition size ---")
1946  OPTIONS.cache_size = cache_size
1947
1948  if OPTIONS.extra_script is not None:
1949    with open(OPTIONS.extra_script) as fp:
1950      OPTIONS.extra_script = fp.read()
1951
1952  if OPTIONS.extracted_input is not None:
1953    OPTIONS.input_tmp = OPTIONS.extracted_input
1954  else:
1955    logger.info("unzipping target target-files...")
1956    OPTIONS.input_tmp = common.UnzipTemp(target_file, UNZIP_PATTERN)
1957  OPTIONS.target_tmp = OPTIONS.input_tmp
1958
1959  # If the caller explicitly specified the device-specific extensions path via
1960  # -s / --device_specific, use that. Otherwise, use META/releasetools.py if it
1961  # is present in the target target_files. Otherwise, take the path of the file
1962  # from 'tool_extensions' in the info dict and look for that in the local
1963  # filesystem, relative to the current directory.
1964  if OPTIONS.device_specific is None:
1965    from_input = os.path.join(OPTIONS.input_tmp, "META", "releasetools.py")
1966    if os.path.exists(from_input):
1967      logger.info("(using device-specific extensions from target_files)")
1968      OPTIONS.device_specific = from_input
1969    else:
1970      OPTIONS.device_specific = OPTIONS.info_dict.get("tool_extensions")
1971
1972  if OPTIONS.device_specific is not None:
1973    OPTIONS.device_specific = os.path.abspath(OPTIONS.device_specific)
1974
1975  # Generate a full OTA.
1976  if source_file is None:
1977    with zipfile.ZipFile(target_file) as input_zip:
1978      WriteFullOTAPackage(
1979          input_zip,
1980          output_file)
1981
1982  # Generate an incremental OTA.
1983  else:
1984    logger.info("unzipping source target-files...")
1985    OPTIONS.source_tmp = common.UnzipTemp(
1986        OPTIONS.incremental_source, UNZIP_PATTERN)
1987    with zipfile.ZipFile(target_file) as input_zip, \
1988        zipfile.ZipFile(source_file) as source_zip:
1989      WriteBlockIncrementalOTAPackage(
1990          input_zip,
1991          source_zip,
1992          output_file)
1993
1994
1995def CalculateRuntimeDevicesAndFingerprints(build_info, boot_variable_values):
1996  """Returns a tuple of sets for runtime devices and fingerprints"""
1997
1998  device_names = {build_info.device}
1999  fingerprints = {build_info.fingerprint}
2000
2001  if not boot_variable_values:
2002    return device_names, fingerprints
2003
2004  # Calculate all possible combinations of the values for the boot variables.
2005  keys = boot_variable_values.keys()
2006  value_list = boot_variable_values.values()
2007  combinations = [dict(zip(keys, values))
2008                  for values in itertools.product(*value_list)]
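  # For example (illustrative values), boot_variable_values =
  #   {"ro.boot.sku": ["sku1", "sku2"], "ro.boot.device": ["dev1"]}
  # yields the combinations
  #   [{"ro.boot.sku": "sku1", "ro.boot.device": "dev1"},
  #    {"ro.boot.sku": "sku2", "ro.boot.device": "dev1"}].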
2009  for placeholder_values in combinations:
2010    # Reload the info_dict as some build properties may change their values
2011    # based on the value of ro.boot* properties.
2012    info_dict = copy.deepcopy(build_info.info_dict)
2013    for partition in common.PARTITIONS_WITH_CARE_MAP:
2014      partition_prop_key = "{}.build.prop".format(partition)
2015      input_file = info_dict[partition_prop_key].input_file
2016      if isinstance(input_file, zipfile.ZipFile):
2017        with zipfile.ZipFile(input_file.filename) as input_zip:
2018          info_dict[partition_prop_key] = \
2019              common.PartitionBuildProps.FromInputFile(input_zip, partition,
2020                                                       placeholder_values)
2021      else:
2022        info_dict[partition_prop_key] = \
2023            common.PartitionBuildProps.FromInputFile(input_file, partition,
2024                                                     placeholder_values)
2025    info_dict["build.prop"] = info_dict["system.build.prop"]
2026
2027    new_build_info = common.BuildInfo(info_dict, build_info.oem_dicts)
2028    device_names.add(new_build_info.device)
2029    fingerprints.add(new_build_info.fingerprint)
2030  return device_names, fingerprints
2031
2032
2033def main(argv):
2034
2035  def option_handler(o, a):
2036    if o in ("-k", "--package_key"):
2037      OPTIONS.package_key = a
2038    elif o in ("-i", "--incremental_from"):
2039      OPTIONS.incremental_source = a
2040    elif o == "--full_radio":
2041      OPTIONS.full_radio = True
2042    elif o == "--full_bootloader":
2043      OPTIONS.full_bootloader = True
2044    elif o == "--wipe_user_data":
2045      OPTIONS.wipe_user_data = True
2046    elif o == "--downgrade":
2047      OPTIONS.downgrade = True
2048      OPTIONS.wipe_user_data = True
2049    elif o == "--override_timestamp":
2050      OPTIONS.downgrade = True
2051    elif o in ("-o", "--oem_settings"):
2052      OPTIONS.oem_source = a.split(',')
2053    elif o == "--oem_no_mount":
2054      OPTIONS.oem_no_mount = True
2055    elif o in ("-e", "--extra_script"):
2056      OPTIONS.extra_script = a
2057    elif o in ("-t", "--worker_threads"):
2058      if a.isdigit():
2059        OPTIONS.worker_threads = int(a)
2060      else:
2061        raise ValueError("Cannot parse value %r for option %r - only "
2062                         "integers are allowed." % (a, o))
2063    elif o in ("-2", "--two_step"):
2064      OPTIONS.two_step = True
2065    elif o == "--include_secondary":
2066      OPTIONS.include_secondary = True
2067    elif o == "--no_signing":
2068      OPTIONS.no_signing = True
2069    elif o == "--verify":
2070      OPTIONS.verify = True
2071    elif o == "--block":
2072      OPTIONS.block_based = True
2073    elif o in ("-b", "--binary"):
2074      OPTIONS.updater_binary = a
2075    elif o == "--stash_threshold":
2076      try:
2077        OPTIONS.stash_threshold = float(a)
2078      except ValueError:
2079        raise ValueError("Cannot parse value %r for option %r - expecting "
2080                         "a float" % (a, o))
2081    elif o == "--log_diff":
2082      OPTIONS.log_diff = a
2083    elif o == "--payload_signer":
2084      OPTIONS.payload_signer = a
2085    elif o == "--payload_signer_args":
2086      OPTIONS.payload_signer_args = shlex.split(a)
2087    elif o == "--payload_signer_maximum_signature_size":
2088      OPTIONS.payload_signer_maximum_signature_size = a
2089    elif o == "--payload_signer_key_size":
2090      # TODO(Xunchang) remove this option after cleaning up the callers.
2091      logger.warning("The option '--payload_signer_key_size' is deprecated."
2092                     " Use '--payload_signer_maximum_signature_size' instead.")
2093      OPTIONS.payload_signer_maximum_signature_size = a
2094    elif o == "--extracted_input_target_files":
2095      OPTIONS.extracted_input = a
2096    elif o == "--skip_postinstall":
2097      OPTIONS.skip_postinstall = True
2098    elif o == "--retrofit_dynamic_partitions":
2099      OPTIONS.retrofit_dynamic_partitions = True
2100    elif o == "--skip_compatibility_check":
2101      OPTIONS.skip_compatibility_check = True
2102    elif o == "--output_metadata_path":
2103      OPTIONS.output_metadata_path = a
2104    elif o == "--disable_fec_computation":
2105      OPTIONS.disable_fec_computation = True
2106    elif o == "--force_non_ab":
2107      OPTIONS.force_non_ab = True
2108    elif o == "--boot_variable_file":
2109      OPTIONS.boot_variable_file = a
2110    else:
2111      return False
2112    return True
2113
2114  args = common.ParseOptions(argv, __doc__,
2115                             extra_opts="b:k:i:d:e:t:2o:",
2116                             extra_long_opts=[
2117                                 "package_key=",
2118                                 "incremental_from=",
2119                                 "full_radio",
2120                                 "full_bootloader",
2121                                 "wipe_user_data",
2122                                 "downgrade",
2123                                 "override_timestamp",
2124                                 "extra_script=",
2125                                 "worker_threads=",
2126                                 "two_step",
2127                                 "include_secondary",
2128                                 "no_signing",
2129                                 "block",
2130                                 "binary=",
2131                                 "oem_settings=",
2132                                 "oem_no_mount",
2133                                 "verify",
2134                                 "stash_threshold=",
2135                                 "log_diff=",
2136                                 "payload_signer=",
2137                                 "payload_signer_args=",
2138                                 "payload_signer_maximum_signature_size=",
2139                                 "payload_signer_key_size=",
2140                                 "extracted_input_target_files=",
2141                                 "skip_postinstall",
2142                                 "retrofit_dynamic_partitions",
2143                                 "skip_compatibility_check",
2144                                 "output_metadata_path=",
2145                                 "disable_fec_computation",
2146                                 "force_non_ab",
2147                                 "boot_variable_file=",
2148                             ], extra_option_handler=option_handler)
2149
2150  if len(args) != 2:
2151    common.Usage(__doc__)
2152    sys.exit(1)
2153
2154  common.InitLogging()
2155
2156  if OPTIONS.downgrade:
2157    # We should only allow downgrading incrementals (as opposed to full OTAs).
2158    # Otherwise the device could be rolled back from an arbitrary build with
2159    # this full OTA package.
2160    if OPTIONS.incremental_source is None:
2161      raise ValueError("Cannot generate downgradable full OTAs")
2162
2163  # Load the build info dicts from the zip directly or the extracted input
2164  # directory. We don't need to unzip the entire target-files zips, because they
2165  # won't be needed for A/B OTAs (brillo_update_payload does that on its own).
2166  # When loading the info dicts, we don't need to provide the second parameter
2167  # to common.LoadInfoDict(). Specifying the second parameter allows replacing
2168  # some properties with their actual paths, such as 'selinux_fc',
2169  # 'ramdisk_dir', which won't be used during OTA generation.
2170  if OPTIONS.extracted_input is not None:
2171    OPTIONS.info_dict = common.LoadInfoDict(OPTIONS.extracted_input)
2172  else:
2173    with zipfile.ZipFile(args[0], 'r') as input_zip:
2174      OPTIONS.info_dict = common.LoadInfoDict(input_zip)
2175
2176  logger.info("--- target info ---")
2177  common.DumpInfoDict(OPTIONS.info_dict)
2178
2179  # Load the source build dict if applicable.
2180  if OPTIONS.incremental_source is not None:
2181    OPTIONS.target_info_dict = OPTIONS.info_dict
2182    with zipfile.ZipFile(OPTIONS.incremental_source, 'r') as source_zip:
2183      OPTIONS.source_info_dict = common.LoadInfoDict(source_zip)
2184
2185    logger.info("--- source info ---")
2186    common.DumpInfoDict(OPTIONS.source_info_dict)
2187
2188  # Load OEM dicts if provided.
2189  OPTIONS.oem_dicts = _LoadOemDicts(OPTIONS.oem_source)
2190
2191  # Assume retrofitting dynamic partitions when the base build does not set
2192  # use_dynamic_partitions but the target build does.
2193  if (OPTIONS.source_info_dict and
2194      OPTIONS.source_info_dict.get("use_dynamic_partitions") != "true" and
2195      OPTIONS.target_info_dict.get("use_dynamic_partitions") == "true"):
2196    if OPTIONS.target_info_dict.get("dynamic_partition_retrofit") != "true":
2197      raise common.ExternalError(
2198          "Expect to generate incremental OTA for retrofitting dynamic "
2199          "partitions, but dynamic_partition_retrofit is not set in target "
2200          "build.")
2201    logger.info("Implicitly generating retrofit incremental OTA.")
2202    OPTIONS.retrofit_dynamic_partitions = True
2203
2204  # Skip postinstall for retrofitting dynamic partitions.
2205  if OPTIONS.retrofit_dynamic_partitions:
2206    OPTIONS.skip_postinstall = True
2207
2208  ab_update = OPTIONS.info_dict.get("ab_update") == "true"
2209  allow_non_ab = OPTIONS.info_dict.get("allow_non_ab") == "true"
2210  if OPTIONS.force_non_ab:
2211    assert allow_non_ab, "--force_non_ab only allowed on devices that support non-A/B"
2212    assert ab_update, "--force_non_ab only allowed on A/B devices"
2213
2214  generate_ab = not OPTIONS.force_non_ab and ab_update
2215
2216  # Use the default key to sign the package if not specified with --package_key.
2217  # A package key is needed for A/B updates, so always define one if an
2218  # A/B update is being created.
2219  if not OPTIONS.no_signing or generate_ab:
2220    if OPTIONS.package_key is None:
2221      OPTIONS.package_key = OPTIONS.info_dict.get(
2222          "default_system_dev_certificate",
2223          "build/make/target/product/security/testkey")
2224    # Get signing keys
2225    OPTIONS.key_passwords = common.GetKeyPasswords([OPTIONS.package_key])
2226
2227  if generate_ab:
2228    GenerateAbOtaPackage(
2229        target_file=args[0],
2230        output_file=args[1],
2231        source_file=OPTIONS.incremental_source)
2232
2233  else:
2234    GenerateNonAbOtaPackage(
2235        target_file=args[0],
2236        output_file=args[1],
2237        source_file=OPTIONS.incremental_source)
2238
2239  # Post OTA generation works.
2240  if OPTIONS.incremental_source is not None and OPTIONS.log_diff:
2241    logger.info("Generating diff logs...")
2242    logger.info("Unzipping target-files for diffing...")
2243    target_dir = common.UnzipTemp(args[0], TARGET_DIFFING_UNZIP_PATTERN)
2244    source_dir = common.UnzipTemp(
2245        OPTIONS.incremental_source, TARGET_DIFFING_UNZIP_PATTERN)
2246
2247    with open(OPTIONS.log_diff, 'w') as out_file:
2248      import target_files_diff
2249      target_files_diff.recursiveDiff(
2250          '', source_dir, target_dir, out_file)
2251
2252  logger.info("done.")
2253
2254
2255if __name__ == '__main__':
2256  try:
2257    common.CloseInheritedPipes()
2258    main(sys.argv[1:])
2259  except common.ExternalError:
2260    logger.exception("\n   ERROR:\n")
2261    sys.exit(1)
2262  finally:
2263    common.Cleanup()
2264