1#!/usr/bin/env python
2#
3# Copyright (C) 2008 The Android Open Source Project
4#
5# Licensed under the Apache License, Version 2.0 (the "License");
6# you may not use this file except in compliance with the License.
7# You may obtain a copy of the License at
8#
9#      http://www.apache.org/licenses/LICENSE-2.0
10#
11# Unless required by applicable law or agreed to in writing, software
12# distributed under the License is distributed on an "AS IS" BASIS,
13# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14# See the License for the specific language governing permissions and
15# limitations under the License.
16
17"""
18Given a target-files zipfile, produces an OTA package that installs that build.
19An incremental OTA is produced if -i is given, otherwise a full OTA is produced.
20
21Usage:  ota_from_target_files [options] input_target_files output_ota_package
22
23Common options that apply to both non-A/B and A/B OTAs
24
25  --downgrade
26      Intentionally generate an incremental OTA that updates from a newer build
27      to an older one (e.g. downgrading from P preview back to O MR1).
28      "ota-downgrade=yes" will be set in the package metadata file. A data wipe
29      will always be enforced when using this flag, so "ota-wipe=yes" will also
30      be included in the metadata file. The update-binary in the source build
31      will be used in the OTA package, unless --binary flag is specified. Please
32      also check the comment for --override_timestamp below.
33
34  -i  (--incremental_from) <file>
35      Generate an incremental OTA using the given target-files zip as the
36      starting build.
37
38  -k  (--package_key) <key>
39      Key to use to sign the package (default is the value of
40      default_system_dev_certificate from the input target-files's
41      META/misc_info.txt, or "build/make/target/product/security/testkey" if
42      that value is not specified).
43
44      For incremental OTAs, the default value is based on the source
45      target-files, not the target build.
46
47  --override_timestamp
48      Intentionally generate an incremental OTA that updates from a newer build
49      to an older one (based on timestamp comparison), by setting the downgrade
50      flag in the package metadata. This differs from the --downgrade flag in
51      that no data wipe is enforced, because we know for sure this is NOT an
52      actual downgrade case; the two builds merely happen to be cut in reverse
53      order (e.g. from two branches). A legitimate use case is that we cut a new
54      build C (after having A and B), but want to enforce an update path of A ->
55      C -> B. Specifying --downgrade may not help, since that would enforce a
56      data wipe for the C -> B update.
57
58      We used to set a fake timestamp in the package metadata for this flow. But
59      now we consolidate the two cases (i.e. an actual downgrade, or a downgrade
60      based on timestamp) with the same "ota-downgrade=yes" flag, with the
61      difference being whether "ota-wipe=yes" is set.
62
63  --wipe_user_data
64      Generate an OTA package that will wipe the user data partition when
65      installed.
66
67  --retrofit_dynamic_partitions
68      Generates an OTA package that updates a device to support dynamic
69      partitions (default False). This flag is implied when generating
70      an incremental OTA where the base build does not support dynamic
71      partitions but the target build does. For A/B, when this flag is set,
72      --skip_postinstall is implied.
73
74  --skip_compatibility_check
75      Skip checking compatibility of the input target files package.
76
77  --output_metadata_path
78      Write a copy of the metadata to a separate file, so that users can read
79      the post-build fingerprint without extracting the OTA package.
80
81  --force_non_ab
82      This flag can only be set on an A/B device that also supports non-A/B
83      updates. Implies --two_step.
84      If set, generates a non-A/B update package.
85      If not set, generates an A/B package for an A/B device and a non-A/B
86      package for a non-A/B device.
87
88  -o  (--oem_settings) <main_file[,additional_files...]>
89      Comma-separated list of files used to specify the expected OEM-specific
90      properties on the OEM partition of the intended device. Multiple expected
91      values can be used by providing multiple files. Only the first dict will
92      be used to compute the fingerprint, while the rest will be used to assert
93      OEM-specific properties.
94
95Non-A/B OTA specific options
96
97  -b  (--binary) <file>
98      Use the given binary as the update-binary in the output package, instead
99      of the binary in the build's target_files. Use for development only.
100
101  --block
102      Generate a block-based OTA for non-A/B device. We have deprecated the
103      support for file-based OTA since O. Block-based OTA will be used by
104      default for all non-A/B devices. This flag is kept only to avoid breaking
105      existing callers.
106
107  -e  (--extra_script) <file>
108      Insert the contents of file at the end of the update script.
109
110  --full_bootloader
111      Similar to --full_radio. When generating an incremental OTA, always
112      include a full copy of the bootloader image.
113
114  --full_radio
115      When generating an incremental OTA, always include a full copy of the radio
116      image. This option is only meaningful when -i is specified, because a full
117      radio image is always included in a full OTA if applicable.
118
119  --log_diff <file>
120      Generate a log file that shows the differences between the source and target
121      builds for an incremental package. This option is only meaningful when -i
122      is specified.
123
124  --oem_no_mount
125      For devices with OEM-specific properties but without an OEM partition, do
126      not mount the OEM partition in the updater-script. This should be very
127      rarely used, since it's expected to have a dedicated OEM partition for
128      OEM-specific properties. Only meaningful when -o is specified.
129
130  --stash_threshold <float>
131      Specify the threshold that will be used to compute the maximum allowed
132      stash size (defaults to 0.8).
133
134  -t  (--worker_threads) <int>
135      Specify the number of worker-threads that will be used when generating
136      patches for incremental updates (defaults to 3).
137
138  --verify
139      Verify the checksums of the updated system and vendor (if any) partitions.
140      Non-A/B incremental OTAs only.
141
142  -2  (--two_step)
143      Generate a 'two-step' OTA package, where recovery is updated first, so
144      that any changes made to the system partition are done using the new
145      recovery (new kernel, etc.).
146
147A/B OTA specific options
148
149  --disable_fec_computation
150      Disable the on-device FEC data computation for incremental updates. The OTA will be larger, but installation will be faster.
151
152  --include_secondary
153      Additionally include the payload for secondary slot images (default:
154      False). Only meaningful when generating A/B OTAs.
155
156      By default, an A/B OTA package doesn't contain the images for the
157      secondary slot (e.g. system_other.img). Specifying this flag allows
158      generating a separate payload that will install secondary slot images.
159
160      Such a package needs to be applied in a two-stage manner, with a reboot
161      in-between. During the first stage, the updater applies the primary
162      payload only. Upon finishing, it reboots the device into the newly updated
163      slot. It then continues to install the secondary payload to the inactive
164      slot, but without switching the active slot at the end (needs the matching
165      support in update_engine, i.e. SWITCH_SLOT_ON_REBOOT flag).
166
167      Due to the special install procedure, the secondary payload will always be
168      generated as a full payload.
169
170  --payload_signer <signer>
171      Specify the signer when signing the payload and metadata for A/B OTAs.
172      By default (i.e. without this flag), it calls 'openssl pkeyutl' to sign
173      with the package private key. If the private key cannot be accessed
174      directly, a payload signer that knows how to do that should be specified.
175      The signer will be supplied with "-inkey <path_to_key>",
176      "-in <input_file>" and "-out <output_file>" parameters.
177
178  --payload_signer_args <args>
179      Specify the arguments needed for payload signer.
180
181  --payload_signer_maximum_signature_size <signature_size>
182      The maximum signature size (in bytes) that would be generated by the given
183      payload signer. Only meaningful when custom payload signer is specified
184      via '--payload_signer'.
185      If the signer uses an RSA key, this should be the number of bytes to
186      represent the modulus. If it uses an EC key, this is the size of a
187      DER-encoded ECDSA signature.
188
189  --payload_signer_key_size <key_size>
190      Deprecated. Use the '--payload_signer_maximum_signature_size' instead.
191
192  --boot_variable_file <path>
193      A file that contains the possible values of ro.boot.* properties. It's
194      used to calculate the possible runtime fingerprints when some
195      ro.product.* properties are overridden by the 'import' statement.
196      The file expects one property per line, and each line has the following
197      format: 'prop_name=value1,value2'. e.g. 'ro.boot.product.sku=std,pro'
198      The path specified can either be relative to the current working directory
199      or the path to a file inside of input_target_files.
200
201  --skip_postinstall
202      Skip the postinstall hooks when generating an A/B OTA package (default:
203      False). Note that this discards ALL the hooks, including non-optional
204      ones. Should only be used if caller knows it's safe to do so (e.g. all the
205      postinstall work is to dexopt apps and a data wipe will happen immediately
206      after). Only meaningful when generating A/B OTAs.
207
208  --partial "<PARTITION> [<PARTITION>[...]]"
209      Generate partial updates, overriding the ab_partitions list with the given
210      list. Specify --partial= without a partition list to let the tooling
211      auto-detect the partial partition list.
212
213  --custom_image <custom_partition=custom_image>
214      Use the specified custom_image to update custom_partition when generating
215      an A/B OTA package. e.g. "--custom_image oem=oem.img --custom_image
216      cus=cus_test.img"
217
218  --disable_vabc
219      Disable Virtual A/B Compression, for builds that have compression enabled
220      by default.
221
222  --vabc_downgrade
223      Don't disable Virtual A/B Compression for downgrading OTAs.
224      For VABC downgrades, we must finish merging before doing the data wipe, and
225      since a data wipe is required for a downgrade OTA, this might cause a long
226      wait time in recovery.
227
228  --enable_vabc_xor
229      Enable the VABC XOR feature. This reduces the space requirements of the OTA, but OTA installation will be slower.
230
231  --force_minor_version
232      Override the update_engine minor version for delta generation.
233
234  --compressor_types
235      A colon-separated (':') list of compressors. Allowed values are bz2 and brotli.
236
237  --enable_zucchini
238      Whether to enable the zucchini feature. Generates a smaller OTA but uses more memory; OTA generation will take longer.
239
240  --enable_puffdiff
241      Whether to enable the puffdiff feature. Generates a smaller OTA but uses more memory; OTA generation will take longer.
242
243  --enable_lz4diff
244      Whether to enable the lz4diff feature. Generates a smaller OTA for EROFS
245      builds but uses more memory.
246
247  --spl_downgrade
248      Force generate an SPL downgrade OTA. Only needed if target build has an
249      older SPL.
250
251  --vabc_compression_param
252      Compression algorithm to be used for VABC. Available options: gz, lz4, zstd, brotli, none.
253      The compression level can be specified by appending ",$LEVEL" to the option,
254      e.g. --vabc_compression_param=gz,9 specifies level 9 compression with the gz algorithm.
255
256  --security_patch_level
257      Override the security patch level in the target files.
258
259  --max_threads
260      Specify the maximum number of threads allowed when generating an A/B OTA.
261
262  --vabc_cow_version
263      Specify the VABC COW version to be used.
264
265  --compression_factor
266      Specify the maximum block size to be compressed at once during OTA. Supported options: 4k, 8k, 16k, 32k, 64k, 128k, 256k.
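
Example invocations (illustrative; the file names below are placeholders):

  # Full OTA from a target-files package, signed with the default test key:
  ota_from_target_files target_files.zip full-ota.zip

  # Incremental OTA against an older build, with a log of the differences:
  ota_from_target_files -i older-target_files.zip --log_diff diff.txt \
      target_files.zip incremental-ota.zip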
267"""
268
269from __future__ import print_function
270
271import logging
272import multiprocessing
273import os
274import os.path
275import re
276import shutil
277import subprocess
278import sys
279import zipfile
280
281import care_map_pb2
282import common
283import ota_utils
284import payload_signer
285from ota_utils import (VABC_COMPRESSION_PARAM_SUPPORT, FinalizeMetadata, GetPackageMetadata,
286                       PayloadGenerator, SECURITY_PATCH_LEVEL_PROP_NAME, ExtractTargetFiles, CopyTargetFilesDir)
287from common import DoesInputFileContain, IsSparseImage
288import target_files_diff
289from non_ab_ota import GenerateNonAbOtaPackage
290from payload_signer import PayloadSigner
291
292if sys.hexversion < 0x02070000:
293  print("Python 2.7 or newer is required.", file=sys.stderr)
294  sys.exit(1)
295
296logger = logging.getLogger(__name__)
297
298OPTIONS = ota_utils.OPTIONS
299OPTIONS.verify = False
300OPTIONS.patch_threshold = 0.95
301OPTIONS.wipe_user_data = False
302OPTIONS.extra_script = None
303OPTIONS.worker_threads = multiprocessing.cpu_count() // 2
304if OPTIONS.worker_threads == 0:
305  OPTIONS.worker_threads = 1
306OPTIONS.two_step = False
307OPTIONS.include_secondary = False
308OPTIONS.block_based = True
309OPTIONS.updater_binary = None
310OPTIONS.oem_dicts = None
311OPTIONS.oem_source = None
312OPTIONS.oem_no_mount = False
313OPTIONS.full_radio = False
314OPTIONS.full_bootloader = False
315# Stash size cannot exceed cache_size * threshold.
316OPTIONS.cache_size = None
317OPTIONS.stash_threshold = 0.8
318OPTIONS.log_diff = None
319OPTIONS.extracted_input = None
320OPTIONS.skip_postinstall = False
321OPTIONS.skip_compatibility_check = False
322OPTIONS.disable_fec_computation = False
323OPTIONS.disable_verity_computation = False
324OPTIONS.partial = None
325OPTIONS.custom_images = {}
326OPTIONS.disable_vabc = False
327OPTIONS.spl_downgrade = False
328OPTIONS.vabc_downgrade = False
329OPTIONS.enable_vabc_xor = True
330OPTIONS.force_minor_version = None
331OPTIONS.compressor_types = None
332OPTIONS.enable_zucchini = False
333OPTIONS.enable_puffdiff = None
334OPTIONS.enable_lz4diff = False
335OPTIONS.vabc_compression_param = None
336OPTIONS.security_patch_level = None
337OPTIONS.max_threads = None
338OPTIONS.vabc_cow_version = None
339OPTIONS.compression_factor = None
340
341
342POSTINSTALL_CONFIG = 'META/postinstall_config.txt'
343DYNAMIC_PARTITION_INFO = 'META/dynamic_partitions_info.txt'
344MISC_INFO = 'META/misc_info.txt'
345AB_PARTITIONS = 'META/ab_partitions.txt'
346
347# Files to be unzipped for target diffing purpose.
348TARGET_DIFFING_UNZIP_PATTERN = ['BOOT', 'RECOVERY', 'SYSTEM/*', 'VENDOR/*',
349                                'PRODUCT/*', 'SYSTEM_EXT/*', 'ODM/*',
350                                'VENDOR_DLKM/*', 'ODM_DLKM/*', 'SYSTEM_DLKM/*']
351RETROFIT_DAP_UNZIP_PATTERN = ['OTA/super_*.img', AB_PARTITIONS]
352
353# Images to be excluded from secondary payload. We essentially only keep
354# 'system_other' and bootloader partitions.
355SECONDARY_PAYLOAD_SKIPPED_IMAGES = [
356    'boot', 'dtbo', 'modem', 'odm', 'odm_dlkm', 'product', 'radio', 'recovery',
357    'system_dlkm', 'system_ext', 'vbmeta', 'vbmeta_system', 'vbmeta_vendor',
358    'vendor', 'vendor_boot']
359
360
361def _LoadOemDicts(oem_source):
362  """Returns the list of loaded OEM properties dict."""
363  if not oem_source:
364    return None
365
366  oem_dicts = []
367  for oem_file in oem_source:
368    oem_dicts.append(common.LoadDictionaryFromFile(oem_file))
369  return oem_dicts
370
371def ModifyKeyvalueList(content: str, key: str, value: str):
372  """ Update update the key value list with specified key and value
373  Args:
374    content: The string content of dynamic_partitions_info.txt. Each line
375      should be a key valur pair, where string before the first '=' are keys,
376      remaining parts are values.
377    key: the key of the key value pair to modify
378    value: the new value to replace with
379
380  Returns:
381    Updated content of the key value list
382  """
383  output_list = []
384  for line in content.splitlines():
385    if line.startswith(key+"="):
386      continue
387    output_list.append(line)
388  output_list.append("{}={}".format(key, value))
389  return "\n".join(output_list)
390
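# A minimal illustration (not part of the tool's flow) of how
# ModifyKeyvalueList rewrites a key-value list: any existing line for the key
# is dropped and the new pair is appended at the end.
def _ExampleModifyKeyvalueList():
  content = "virtual_ab=true\nvirtual_ab_compression_method=gz"
  updated = ModifyKeyvalueList(content, "virtual_ab_compression_method", "lz4")
  assert updated == "virtual_ab=true\nvirtual_ab_compression_method=lz4"
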
391def ModifyVABCCompressionParam(content, algo):
392  """ Update update VABC Compression Param in dynamic_partitions_info.txt
393  Args:
394    content: The string content of dynamic_partitions_info.txt
395    algo: The compression algorithm to be used for VABC. See
396          https://cs.android.com/android/platform/superproject/+/master:system/core/fs_mgr/libsnapshot/cow_writer.cpp;l=127;bpv=1;bpt=1?q=CowWriter::ParseOptions&sq=
397  Returns:
398    Updated content of dynamic_partitions_info.txt, with the custom compression algo.
399  """
400  return ModifyKeyvalueList(content, "virtual_ab_compression_method", algo)
401
402
403def UpdatesInfoForSpecialUpdates(content, partitions_filter,
404                                 delete_keys=None):
405  """ Updates info file for secondary payload generation, partial update, etc.
406
407    Scan each line in the info file, and remove the unwanted partitions from
408    the dynamic partition list in the related properties. e.g.
409    "super_google_dynamic_partitions_partition_list=system vendor product"
410    will become "super_google_dynamic_partitions_partition_list=system".
411
412  Args:
413    content: The content of the input info file. e.g. misc_info.txt.
414    partitions_filter: A function to filter the desired partitions from a given
415      list
416    delete_keys: A list of keys to delete in the info file
417
418  Returns:
419    A string of the updated info content.
420  """
421
422  output_list = []
423  # The suffix in partition_list variables that follows the name of the
424  # partition group.
425  list_suffix = 'partition_list'
426  for line in content.splitlines():
427    if line.startswith('#') or '=' not in line:
428      output_list.append(line)
429      continue
430    key, value = line.strip().split('=', 1)
431
432    if delete_keys and key in delete_keys:
433      pass
434    elif key.endswith(list_suffix):
435      partitions = value.split()
436      # TODO: for partial updates, partitions in the same group must all be
437      # updated or all omitted.
438      partitions = filter(partitions_filter, partitions)
439      output_list.append('{}={}'.format(key, ' '.join(partitions)))
440    else:
441      output_list.append(line)
442  return '\n'.join(output_list)
443
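# A hedged sketch (not used by the tool) of UpdatesInfoForSpecialUpdates:
# entries listed in delete_keys are dropped, and *partition_list values are
# filtered down to the partitions accepted by partitions_filter.
def _ExampleUpdatesInfoForSpecialUpdates():
  content = ("virtual_ab=true\n"
             "super_partition_list=system vendor product\n")
  updated = UpdatesInfoForSpecialUpdates(
      content, lambda p: p == "system", delete_keys=["virtual_ab"])
  assert updated == "super_partition_list=system"
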
444
445def GetTargetFilesZipForSecondaryImages(input_file, skip_postinstall=False):
446  """Returns a target-files.zip file for generating secondary payload.
447
448  Although the original target-files.zip already contains secondary slot
449  images (i.e. IMAGES/system_other.img), we need to rename the files to the
450  ones without _other suffix. Note that we cannot instead modify the names in
451  META/ab_partitions.txt, because there are no matching partitions on device.
452
453  For the partitions that don't have secondary images, the ones for primary
454  slot will be used. This is to ensure that we always have valid boot, vbmeta,
455  bootloader images in the inactive slot.
456
457  After writing system_other to the inactive slot's system partition,
458  PackageManagerService will read `ro.cp_system_other_odex`, and set
459  `sys.cppreopt` to "requested". Then, according to
460  system/extras/cppreopts/cppreopts.rc, init will mount system_other at
461  /postinstall, and execute `cppreopts` to copy optimized APKs from
462  /postinstall to /data.
463
464  Args:
465    input_file: The input target-files.zip file.
466    skip_postinstall: Whether to skip copying the postinstall config file.
467
468  Returns:
469    The filename of the target-files.zip for generating secondary payload.
470  """
471
472  def GetInfoForSecondaryImages(info_file):
473    """Updates info file for secondary payload generation."""
474    with open(info_file) as f:
475      content = f.read()
476    # Remove the virtual_ab flag from the secondary payload so that the OTA
477    # client doesn't use snapshots for the secondary update.
478    delete_keys = ['virtual_ab', "virtual_ab_retrofit"]
479    return UpdatesInfoForSpecialUpdates(
480        content, lambda p: p not in SECONDARY_PAYLOAD_SKIPPED_IMAGES,
481        delete_keys)
482
483  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
484  target_zip = zipfile.ZipFile(target_file, 'w', allowZip64=True)
485
486  fileslist = []
487  for (root, dirs, files) in os.walk(input_file):
488    root = os.path.relpath(root, input_file)  # lstrip() would strip chars, not a path prefix
489    fileslist.extend([os.path.join(root, d) for d in dirs])
490    fileslist.extend([os.path.join(root, d) for d in files])
491
492  input_tmp = input_file
493  for filename in fileslist:
494    unzipped_file = os.path.join(input_tmp, *filename.split('/'))
495    if filename == 'IMAGES/system_other.img':
496      common.ZipWrite(target_zip, unzipped_file, arcname='IMAGES/system.img')
497
498    # Primary images and friends need to be skipped explicitly.
499    elif filename in ('IMAGES/system.img',
500                      'IMAGES/system.map'):
501      pass
502
503    # Copy images that are not in SECONDARY_PAYLOAD_SKIPPED_IMAGES.
504    elif filename.startswith(('IMAGES/', 'RADIO/')):
505      image_name = os.path.basename(filename)
506      if image_name not in ['{}.img'.format(partition) for partition in
507                            SECONDARY_PAYLOAD_SKIPPED_IMAGES]:
508        common.ZipWrite(target_zip, unzipped_file, arcname=filename)
509
510    # Skip copying the postinstall config if requested.
511    elif skip_postinstall and filename == POSTINSTALL_CONFIG:
512      pass
513
514    elif filename.startswith('META/'):
515      # Remove the unnecessary partitions for secondary images from the
516      # ab_partitions file.
517      if filename == AB_PARTITIONS:
518        with open(unzipped_file) as f:
519          partition_list = f.read().splitlines()
520        partition_list = [partition for partition in partition_list if partition
521                          and partition not in SECONDARY_PAYLOAD_SKIPPED_IMAGES]
522        common.ZipWriteStr(target_zip, filename,
523                           '\n'.join(partition_list))
524      # Remove the unnecessary partitions from the dynamic partitions list.
525      elif (filename == 'META/misc_info.txt' or
526            filename == DYNAMIC_PARTITION_INFO):
527        modified_info = GetInfoForSecondaryImages(unzipped_file)
528        common.ZipWriteStr(target_zip, filename, modified_info)
529      else:
530        common.ZipWrite(target_zip, unzipped_file, arcname=filename)
531
532  common.ZipClose(target_zip)
533
534  return target_file
535
536
537def GetTargetFilesZipWithoutPostinstallConfig(input_file):
538  """Returns a target-files.zip that's not containing postinstall_config.txt.
539
540  This allows brillo_update_payload script to skip writing all the postinstall
541  hooks in the generated payload. The input target-files.zip file will be
542  duplicated, with 'META/postinstall_config.txt' skipped. If input_file doesn't
543  contain the postinstall_config.txt entry, the input file will be returned.
544
545  Args:
546    input_file: The input target-files.zip filename.
547
548  Returns:
549    The filename of target-files.zip that doesn't contain postinstall config.
550  """
551  config_path = os.path.join(input_file, POSTINSTALL_CONFIG)
552  if os.path.exists(config_path):
553    os.unlink(config_path)
554  return input_file
555
556
557def ParseInfoDict(target_file_path):
558  return common.LoadInfoDict(target_file_path)
559
560def ModifyTargetFilesDynamicPartitionInfo(input_file, key, value):
561  """Returns a target-files.zip with a custom VABC compression param.
562  Args:
563    input_file: The input target-files.zip path
564    vabc_compression_param: Custom Virtual AB Compression algorithm
565
566  Returns:
567    The path to modified target-files.zip
568  """
569  if os.path.isdir(input_file):
570    dynamic_partition_info_path = os.path.join(
571        input_file, *DYNAMIC_PARTITION_INFO.split("/"))
572    with open(dynamic_partition_info_path, "r") as fp:
573      dynamic_partition_info = fp.read()
574    dynamic_partition_info = ModifyKeyvalueList(
575        dynamic_partition_info, key, value)
576    with open(dynamic_partition_info_path, "w") as fp:
577      fp.write(dynamic_partition_info)
578    return input_file
579
580  target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
581  shutil.copyfile(input_file, target_file)
582  common.ZipDelete(target_file, DYNAMIC_PARTITION_INFO)
583  with zipfile.ZipFile(input_file, 'r', allowZip64=True) as zfp:
584    dynamic_partition_info = zfp.read(DYNAMIC_PARTITION_INFO).decode()
585    dynamic_partition_info = ModifyKeyvalueList(
586        dynamic_partition_info, key, value)
587    with zipfile.ZipFile(target_file, "a", allowZip64=True) as output_zip:
588      output_zip.writestr(DYNAMIC_PARTITION_INFO, dynamic_partition_info)
589  return target_file
590
591def GetTargetFilesZipForCustomVABCCompression(input_file, vabc_compression_param):
592  """Returns a target-files.zip with a custom VABC compression param.
593  Args:
594    input_file: The input target-files.zip path
595    vabc_compression_param: Custom Virtual AB Compression algorithm
596
597  Returns:
598    The path to modified target-files.zip
599  """
600  return ModifyTargetFilesDynamicPartitionInfo(input_file, "virtual_ab_compression_method", vabc_compression_param)
601
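# A hedged sketch (the helper name and temp paths are illustrative): given an
# extracted target-files directory, GetTargetFilesZipForCustomVABCCompression
# rewrites virtual_ab_compression_method in META/dynamic_partitions_info.txt
# in place.
def _ExampleCustomVabcCompression():
  import tempfile
  tmp_dir = tempfile.mkdtemp()
  os.makedirs(os.path.join(tmp_dir, "META"))
  info_path = os.path.join(tmp_dir, "META", "dynamic_partitions_info.txt")
  with open(info_path, "w") as f:
    f.write("virtual_ab_compression_method=gz\n")
  GetTargetFilesZipForCustomVABCCompression(tmp_dir, "lz4")
  with open(info_path) as f:
    assert "virtual_ab_compression_method=lz4" in f.read()
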
602
603def GetTargetFilesZipForPartialUpdates(input_file, ab_partitions):
604  """Returns a target-files.zip for partial ota update package generation.
605
606  This function modifies ab_partitions list with the desired partitions before
607  calling the brillo_update_payload script. It also cleans up the reference to
608  the excluded partitions in the info file, e.g. misc_info.txt.
609
610  Args:
611    input_file: The input target-files.zip filename.
612    ab_partitions: A list of partitions to include in the partial update
613
614  Returns:
615    The filename of target-files.zip used for partial ota update.
616  """
617
618  original_ab_partitions = common.ReadFromInputFile(input_file, AB_PARTITIONS)
619
620  unrecognized_partitions = [partition for partition in ab_partitions if
621                             partition not in original_ab_partitions]
622  if unrecognized_partitions:
623    raise ValueError("Unrecognized partitions when generating partial updates",
624                     unrecognized_partitions)
625
626  logger.info("Generating partial updates for %s", ab_partitions)
627  for subdir in ["IMAGES", "RADIO", "PREBUILT_IMAGES"]:
628    image_dir = os.path.join(input_file, subdir)
629    if not os.path.exists(image_dir):
630      continue
631    for filename in os.listdir(image_dir):
632      filepath = os.path.join(image_dir, filename)
633      if filename.endswith(".img"):
634        partition_name = filename.removesuffix(".img")
635        if partition_name not in ab_partitions:
636          os.unlink(filepath)
637
638  common.WriteToInputFile(input_file, 'META/ab_partitions.txt',
639                          '\n'.join(ab_partitions))
640  CARE_MAP_ENTRY = "META/care_map.pb"
641  if DoesInputFileContain(input_file, CARE_MAP_ENTRY):
642    caremap = care_map_pb2.CareMap()
643    caremap.ParseFromString(
644        common.ReadBytesFromInputFile(input_file, CARE_MAP_ENTRY))
645    filtered = [
646        part for part in caremap.partitions if part.name in ab_partitions]
647    del caremap.partitions[:]
648    caremap.partitions.extend(filtered)
649    common.WriteBytesToInputFile(input_file, CARE_MAP_ENTRY,
650                                 caremap.SerializeToString())
651
652  for info_file in ['META/misc_info.txt', DYNAMIC_PARTITION_INFO]:
653    if not DoesInputFileContain(input_file, info_file):
654      logger.warning('Cannot find %s in input zipfile', info_file)
655      continue
656
657    content = common.ReadFromInputFile(input_file, info_file)
658    modified_info = UpdatesInfoForSpecialUpdates(
659        content, lambda p: p in ab_partitions)
660    if OPTIONS.vabc_compression_param and info_file == DYNAMIC_PARTITION_INFO:
661      modified_info = ModifyVABCCompressionParam(
662          modified_info, OPTIONS.vabc_compression_param)
663    common.WriteToInputFile(input_file, info_file, modified_info)
664
665  def IsInPartialList(postinstall_line: str):
666    idx = postinstall_line.find("=")
667    if idx < 0:
668      return False
669    key = postinstall_line[:idx]
670    logger.info("%s %s", key, ab_partitions)
671    for part in ab_partitions:
672      if key.endswith("_" + part):
673        return True
674    return False
675
676  if common.DoesInputFileContain(input_file, POSTINSTALL_CONFIG):
677    postinstall_config = common.ReadFromInputFile(
678        input_file, POSTINSTALL_CONFIG)
679    postinstall_config = [
680        line for line in postinstall_config.splitlines() if IsInPartialList(line)]
681    if postinstall_config:
682      postinstall_config = "\n".join(postinstall_config)
683      common.WriteToInputFile(
684          input_file, POSTINSTALL_CONFIG, postinstall_config)
685    else:
686      os.unlink(os.path.join(input_file, POSTINSTALL_CONFIG))
687
688  return input_file
689
690
691def GetTargetFilesZipForRetrofitDynamicPartitions(input_file,
692                                                  super_block_devices,
693                                                  dynamic_partition_list):
694  """Returns a target-files.zip for retrofitting dynamic partitions.
695
696  This allows brillo_update_payload to generate an OTA based on the exact
697  bits on the block devices. Postinstall is disabled.
698
699  Args:
700    input_file: The input target-files.zip filename.
701    super_block_devices: The list of super block devices
702    dynamic_partition_list: The list of dynamic partitions
703
704  Returns:
705    The filename of target-files.zip with *.img replaced with super_*.img for
706    each block device in super_block_devices.
707  """
708  assert super_block_devices, "No super_block_devices are specified."
709
710  replace = {'OTA/super_{}.img'.format(dev): 'IMAGES/{}.img'.format(dev)
711             for dev in super_block_devices}
712
713  # Remove partitions from META/ab_partitions.txt that are in
714  # dynamic_partition_list but not in super_block_devices, so that
715  # brillo_update_payload won't generate updates for those logical partitions.
716  ab_partitions_lines = common.ReadFromInputFile(
717      input_file, AB_PARTITIONS).split("\n")
718  ab_partitions = [line.strip() for line in ab_partitions_lines]
719  # Assert that all super_block_devices are in ab_partitions
720  super_device_not_updated = [partition for partition in super_block_devices
721                              if partition not in ab_partitions]
722  assert not super_device_not_updated, \
723      "{} is in super_block_devices but not in {}".format(
724          super_device_not_updated, AB_PARTITIONS)
725  # ab_partitions -= (dynamic_partition_list - super_block_devices)
726  to_delete = [AB_PARTITIONS]
727
728  # Always skip postinstall for a retrofit update.
729  to_delete += [POSTINSTALL_CONFIG]
730
731  # Delete dynamic_partitions_info.txt so that brillo_update_payload thinks this
732  # is a regular update on devices without dynamic partitions support.
733  to_delete += [DYNAMIC_PARTITION_INFO]
734
735  # Remove the existing partition images as well as the map files.
736  to_delete += list(replace.values())
737  to_delete += ['IMAGES/{}.map'.format(dev) for dev in super_block_devices]
738  for item in to_delete:
739    os.unlink(os.path.join(input_file, item))
740
741  # Write super_{foo}.img as {foo}.img.
742  for src, dst in replace.items():
743    assert DoesInputFileContain(input_file, src), \
744        'Missing {} in {}; {} cannot be written'.format(src, input_file, dst)
745    source_path = os.path.join(input_file, *src.split("/"))
746    target_path = os.path.join(input_file, *dst.split("/"))
747    os.rename(source_path, target_path)
748
749  # Write new ab_partitions.txt file
750  new_ab_partitions = os.path.join(input_file, AB_PARTITIONS)
751  with open(new_ab_partitions, 'w') as f:
752    for partition in ab_partitions:
753      if (partition in dynamic_partition_list and
754              partition not in super_block_devices):
755        logger.info("Dropping %s from ab_partitions.txt", partition)
756        continue
757      f.write(partition + "\n")
758
759  return input_file
760
761
762def GetTargetFilesZipForCustomImagesUpdates(input_file, custom_images: dict):
763  """Returns a target-files.zip for custom partitions update.
764
765  This function renames the desired custom images in IMAGES/ of the extracted
766  target files to their partition names ('<partition>.img').
767
768  Args:
769    input_file: The input target-files extracted directory
770    custom_images: A map of custom partitions and custom images.
771
772  Returns:
773    The extracted dir of a target-files.zip which has renamed the custom images
774    in the IMAGES/ to their partition names.
775  """
776  for custom_image in custom_images.values():
777    if not os.path.exists(os.path.join(input_file, "IMAGES", custom_image)):
778      raise ValueError("Specified custom image {} not found in target files {}, available images are {}",
779                       custom_image, input_file, os.listdir(os.path.join(input_file, "IMAGES")))
780
781  for custom_partition, custom_image in custom_images.items():
782    default_custom_image = '{}.img'.format(custom_partition)
783    if default_custom_image != custom_image:
784      src = os.path.join(input_file, 'IMAGES', custom_image)
785      dst = os.path.join(input_file, 'IMAGES', default_custom_image)
786      os.rename(src, dst)
787
788  return input_file
789
790
791def GeneratePartitionTimestampFlags(partition_state):
792  partition_timestamps = [
793      part.partition_name + ":" + part.version
794      for part in partition_state]
795  return ["--partition_timestamps", ",".join(partition_timestamps)]
796
797
798def GeneratePartitionTimestampFlagsDowngrade(
799        pre_partition_state, post_partition_state):
800  assert pre_partition_state is not None
801  partition_timestamps = {}
802  for part in post_partition_state:
803    partition_timestamps[part.partition_name] = part.version
804  for part in pre_partition_state:
805    if part.partition_name in partition_timestamps:
806      partition_timestamps[part.partition_name] = \
807          max(part.version, partition_timestamps[part.partition_name])
808  return [
809      "--partition_timestamps",
810      ",".join([key + ":" + val for (key, val)
811                in partition_timestamps.items()])
812  ]
813
814
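# Hedged illustration (PartitionState is a stand-in namedtuple; the real
# objects come from the OTA metadata). Versions are compared as strings, so
# equal-length timestamps are used. For a downgrade, the larger of the source
# and target versions is kept for each partition.
def _ExamplePartitionTimestampFlags():
  from collections import namedtuple
  PartitionState = namedtuple("PartitionState", ["partition_name", "version"])
  pre = [PartitionState("system", "1700000000")]
  post = [PartitionState("system", "1690000000")]
  assert GeneratePartitionTimestampFlags(post) == [
      "--partition_timestamps", "system:1690000000"]
  assert GeneratePartitionTimestampFlagsDowngrade(pre, post) == [
      "--partition_timestamps", "system:1700000000"]
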
815def SupportsMainlineGkiUpdates(target_file):
816  """Return True if the build supports MainlineGKIUpdates.
817
818  This function scans the product.img file in the IMAGES/ directory for the
819  pattern |*/apex/com.android.gki.*.apex|. If there are files
820  matching this pattern, conclude that the build supports mainline
821  GKI and return True.
822
823  Args:
824    target_file: Path to a target_file.zip, or an extracted directory
825  Returns:
826    True if this build supports Mainline GKI updates.
827  """
828  if target_file is None:
829    return False
830  if os.path.isfile(target_file):
831    target_file = common.UnzipTemp(target_file, ["IMAGES/product.img"])
832  # At this point, target_file must be an extracted target-files directory.
833  assert os.path.isdir(target_file), \
834      "{} must be a path to zip archive or dir containing extracted" \
835      " target_files".format(target_file)
836  image_file = os.path.join(target_file, "IMAGES", "product.img")
837
838  if not os.path.isfile(image_file):
839    return False
840
841  if IsSparseImage(image_file):
842    # Unsparse the image
843    tmp_img = common.MakeTempFile(suffix=".img")
844    subprocess.check_output(["simg2img", image_file, tmp_img])
845    image_file = tmp_img
846
847  cmd = ["debugfs_static", "-R", "ls -p /apex", image_file]
848  output = subprocess.check_output(cmd).decode()
849
850  pattern = re.compile(r"com\.android\.gki\..*\.apex")
851  return pattern.search(output) is not None
852
853
854def ExtractOrCopyTargetFiles(target_file):
855  if os.path.isdir(target_file):
856    return CopyTargetFilesDir(target_file)
857  else:
858    return ExtractTargetFiles(target_file)
859
860
861def ValidateCompressionParam(target_info):
862  vabc_compression_param = OPTIONS.vabc_compression_param
863  if vabc_compression_param:
864    minimum_api_level_required = VABC_COMPRESSION_PARAM_SUPPORT[vabc_compression_param.split(",")[0]]
865    if target_info.vendor_api_level < minimum_api_level_required:
866      raise ValueError("Specified VABC compression param {} is only supported for API level >= {}, device is on API level {}".format(
867          vabc_compression_param, minimum_api_level_required, target_info.vendor_api_level))
868
869
870def GenerateAbOtaPackage(target_file, output_file, source_file=None):
871  """Generates an Android OTA package that has A/B update payload."""
872  # If input target_files are directories, create a copy so that we can modify
873  # them directly
874  target_info = common.BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
875  if OPTIONS.disable_vabc and target_info.is_release_key:
876    raise ValueError("Disabling VABC on release-key builds is not supported.")
877
878  ValidateCompressionParam(target_info)
879  vabc_compression_param = target_info.vabc_compression_param
880
881  target_file = ExtractOrCopyTargetFiles(target_file)
882  if source_file is not None:
883    source_file = ExtractOrCopyTargetFiles(source_file)
884  # Stage the output zip package for package signing.
885  if not OPTIONS.no_signing:
886    staging_file = common.MakeTempFile(suffix='.zip')
887  else:
888    staging_file = output_file
889  output_zip = zipfile.ZipFile(staging_file, "w",
890                               compression=zipfile.ZIP_DEFLATED,
891                               allowZip64=True)
892
893  if source_file is not None:
894    source_file = ExtractTargetFiles(source_file)
895    assert "ab_partitions" in OPTIONS.source_info_dict, \
896        "META/ab_partitions.txt is required for ab_update."
897    assert "ab_partitions" in OPTIONS.target_info_dict, \
898        "META/ab_partitions.txt is required for ab_update."
899    target_info = common.BuildInfo(OPTIONS.target_info_dict, OPTIONS.oem_dicts)
900    source_info = common.BuildInfo(OPTIONS.source_info_dict, OPTIONS.oem_dicts)
901    # If the source supports VABC, delta_generator/update_engine will attempt
902    # to use VABC. This is dangerous, as the target build won't have snapuserd
903    # to serve I/O requests when the device boots. Therefore, disable VABC if
904    # the source build doesn't support it.
905    if not source_info.is_vabc or not target_info.is_vabc:
906      logger.info("Either source or target does not support VABC, disabling.")
907      OPTIONS.disable_vabc = True
908    if OPTIONS.vabc_compression_param is None and \
909            source_info.vabc_compression_param != target_info.vabc_compression_param:
910      logger.info("Source build and target build use different compression methods {} vs {}, default to source builds parameter {}".format(
911          source_info.vabc_compression_param, target_info.vabc_compression_param, source_info.vabc_compression_param))
912      vabc_compression_param = source_info.vabc_compression_param
913    # Virtual A/B COW version 3 was introduced in Android U with improved
914    # memory and install-time performance. An OTA can use the new format only
915    # when both the source build and the target build have
916    # VIRTUAL_AB_COW_VERSION = 3; otherwise, fall back to the older version.
917    if not OPTIONS.vabc_cow_version:
918      if not source_info.vabc_cow_version or not target_info.vabc_cow_version:
919        logger.info("Source or Target doesn't have VABC_COW_VERSION specified, default to version 2")
920        OPTIONS.vabc_cow_version = 2
921      elif source_info.vabc_cow_version != target_info.vabc_cow_version:
922        logger.info("Source and Target have different cow VABC_COW_VERSION specified, default to minimum version")
923        OPTIONS.vabc_cow_version = min(source_info.vabc_cow_version, target_info.vabc_cow_version)
924
925    # Virtual AB Compression was introduced in Android S.
926    # Later, we backported VABC to Android R. But verity support was not
927    # backported, so if VABC is used and we are on Android R, disable
928    # verity computation.
929    if not OPTIONS.disable_vabc and source_info.is_android_r:
930      OPTIONS.disable_verity_computation = True
931      OPTIONS.disable_fec_computation = True
932
933  else:
934    assert "ab_partitions" in OPTIONS.info_dict, \
935        "META/ab_partitions.txt is required for ab_update."
936    source_info = None
937    if not OPTIONS.vabc_cow_version:
938      if not target_info.vabc_cow_version:
939        OPTIONS.vabc_cow_version = 2
940      elif target_info.vabc_cow_version >= "3" and target_info.vendor_api_level < 35:
941        logger.warning(
942              "This full OTA is configured to use VABC cow version"
943              " 3 which is supported since"
944              " Android API level 35, but the device is"
945              " launched with API level {}. If this full OTA is"
946              " served to a device running old build, OTA might fail due to "
947              "unsupported vabc cow version. For safety, version 2 is used because "
948              "it's supported since day 1.".format(
949                  target_info.vendor_api_level))
950        OPTIONS.vabc_cow_version = 2
951    if OPTIONS.vabc_compression_param is None and vabc_compression_param:
952      minimum_api_level_required = VABC_COMPRESSION_PARAM_SUPPORT[
953          vabc_compression_param]
954      if target_info.vendor_api_level < minimum_api_level_required:
955        logger.warning(
956            "This full OTA is configured to use VABC compression algorithm"
957            " {}, which is supported since"
958            " Android API level {}, but the device is"
959            " launched with API level {}. If this full OTA is"
960            " served to a device running old build, OTA might fail due to "
961            "unsupported compression parameter. For safety, gz is used because "
962            "it's supported since day 1.".format(
963                vabc_compression_param,
964                minimum_api_level_required,
965                target_info.vendor_api_level))
966        vabc_compression_param = "gz"
967
968  if OPTIONS.partial == []:
969    logger.info(
970        "Automatically detecting partial partition list from input target files.")
971    OPTIONS.partial = target_info.get(
972        "partial_ota_update_partitions_list").split()
973    assert OPTIONS.partial, ("Input target_file does not have"
974        " partial_ota_update_partitions_list defined, failed to auto detect partial"
975        " partition list. Please specify list of partitions to update manually via"
976        " --partial=a,b,c , or generate a complete OTA by removing the --partial"
977        " option")
978    OPTIONS.partial.sort()
979    if source_info:
980      source_partial_list = source_info.get(
981          "partial_ota_update_partitions_list").split()
982      if source_partial_list:
983        source_partial_list.sort()
984        if source_partial_list != OPTIONS.partial:
985          logger.warning("Source build and target build have different partial partition lists. Source: %s, target: %s, taking the intersection.",
986                         source_partial_list, OPTIONS.partial)
987          OPTIONS.partial = list(
988              set(OPTIONS.partial) & set(source_partial_list))
989          OPTIONS.partial.sort()
990    logger.info("Automatically deduced partial partition list: %s",
991                OPTIONS.partial)
992
993  if target_info.vendor_suppressed_vabc:
994    logger.info("Vendor suppressed VABC. Disabling")
995    OPTIONS.disable_vabc = True
996
997  # Both source and target build need to support VABC XOR for us to use it.
998  # Source build's update_engine must be able to write XOR ops, and target
999  # build's snapuserd must be able to interpret XOR ops.
1000  if not target_info.is_vabc_xor or OPTIONS.disable_vabc or \
1001          (source_info is not None and not source_info.is_vabc_xor):
1002    logger.info("VABC XOR Not supported, disabling")
1003    OPTIONS.enable_vabc_xor = False
1004
1005  if OPTIONS.vabc_compression_param == "none":
1006    logger.info(
1007        "VABC Compression algorithm is set to 'none', disabling VABC xor")
1008    OPTIONS.enable_vabc_xor = False
1009
1010  if OPTIONS.enable_vabc_xor:
1011    api_level = -1
1012    if source_info is not None:
1013      api_level = source_info.vendor_api_level
1014    if api_level == -1:
1015      api_level = target_info.vendor_api_level
1016
1017    # XOR is only supported on T and higher.
1018    if api_level < 33:
1019      logger.error("VABC XOR not supported on this vendor, disabling")
1020      OPTIONS.enable_vabc_xor = False
1021
1022  if OPTIONS.vabc_compression_param:
1023    vabc_compression_param = OPTIONS.vabc_compression_param
1024
1025  additional_args = []
1026
1027  # Prepare custom images.
1028  if OPTIONS.custom_images:
1029    target_file = GetTargetFilesZipForCustomImagesUpdates(
1030        target_file, OPTIONS.custom_images)
1031
1032  if OPTIONS.retrofit_dynamic_partitions:
1033    target_file = GetTargetFilesZipForRetrofitDynamicPartitions(
1034        target_file, target_info.get("super_block_devices").strip().split(),
1035        target_info.get("dynamic_partition_list").strip().split())
1036  elif OPTIONS.partial:
1037    target_file = GetTargetFilesZipForPartialUpdates(target_file,
1038                                                     OPTIONS.partial)
1039  if vabc_compression_param != target_info.vabc_compression_param:
1040    target_file = GetTargetFilesZipForCustomVABCCompression(
1041        target_file, vabc_compression_param)
1042  if OPTIONS.vabc_cow_version:
1043    target_file = ModifyTargetFilesDynamicPartitionInfo(target_file, "virtual_ab_cow_version", OPTIONS.vabc_cow_version)
1044  if OPTIONS.compression_factor:
1045    target_file = ModifyTargetFilesDynamicPartitionInfo(target_file, "virtual_ab_compression_factor", OPTIONS.compression_factor)
1046  if OPTIONS.skip_postinstall:
1047    target_file = GetTargetFilesZipWithoutPostinstallConfig(target_file)
1048  # Target_file may have been modified, reparse ab_partitions
1049  target_info.info_dict['ab_partitions'] = common.ReadFromInputFile(target_file,
1050                                                                    AB_PARTITIONS).strip().split("\n")
1051
1052  from check_target_files_vintf import CheckVintfIfTrebleEnabled
1053  CheckVintfIfTrebleEnabled(target_file, target_info)
1054
1055  # Allow boot_variable_file to also exist in target-files
1056  if OPTIONS.boot_variable_file:
1057    if not os.path.isfile(OPTIONS.boot_variable_file):
1058      OPTIONS.boot_variable_file = os.path.join(target_file, OPTIONS.boot_variable_file)
1059  # Metadata to comply with Android OTA package format.
1060  metadata = GetPackageMetadata(target_info, source_info)
1061  # Generate payload.
1062  payload = PayloadGenerator(
1063      wipe_user_data=OPTIONS.wipe_user_data, minor_version=OPTIONS.force_minor_version, is_partial_update=OPTIONS.partial, spl_downgrade=OPTIONS.spl_downgrade)
1064
1065  partition_timestamps_flags = []
1066  # Enforce a max timestamp this payload can be applied on top of.
1067  if OPTIONS.downgrade:
1068    # When generating an OTA between merged target-files, the partition build
1069    # date can decrease in the target at the same time as ro.build.date.utc
1070    # increases, so always pick the largest value.
1071    max_timestamp = max(source_info.GetBuildProp("ro.build.date.utc"),
1072        str(metadata.postcondition.timestamp))
1073    partition_timestamps_flags = GeneratePartitionTimestampFlagsDowngrade(
1074        metadata.precondition.partition_state,
1075        metadata.postcondition.partition_state
1076    )
1077  else:
1078    max_timestamp = str(metadata.postcondition.timestamp)
1079    partition_timestamps_flags = GeneratePartitionTimestampFlags(
1080        metadata.postcondition.partition_state)
1081
1082  if not ota_utils.IsZucchiniCompatible(source_file, target_file):
1083    logger.warning(
1084        "Builds doesn't support zucchini, or source/target don't have compatible zucchini versions. Disabling zucchini.")
1085    OPTIONS.enable_zucchini = False
1086
1087  security_patch_level = target_info.GetBuildProp(
1088      "ro.build.version.security_patch")
1089  if OPTIONS.security_patch_level is not None:
1090    security_patch_level = OPTIONS.security_patch_level
1091
1092  additional_args += ["--security_patch_level", security_patch_level]
1093
1094  if OPTIONS.max_threads:
1095    additional_args += ["--max_threads", OPTIONS.max_threads]
1096
1097  additional_args += ["--enable_zucchini=" +
1098                      str(OPTIONS.enable_zucchini).lower()]
1099  if OPTIONS.enable_puffdiff is not None:
1100    additional_args += ["--enable_puffdiff=" +
1101                        str(OPTIONS.enable_puffdiff).lower()]
1102
1103  if not ota_utils.IsLz4diffCompatible(source_file, target_file):
1104    logger.warning(
1105        "Source build doesn't support lz4diff, or source/target don't have compatible lz4diff versions. Disabling lz4diff.")
1106    OPTIONS.enable_lz4diff = False
1107
1108  additional_args += ["--enable_lz4diff=" +
1109                      str(OPTIONS.enable_lz4diff).lower()]
1110
1111  if source_file and OPTIONS.enable_lz4diff:
1112    input_tmp = common.UnzipTemp(source_file, ["META/liblz4.so"])
1113    liblz4_path = os.path.join(input_tmp, "META", "liblz4.so")
1114    assert os.path.exists(
1115        liblz4_path), "liblz4.so not found in META/ dir of target file {}".format(liblz4_path)
1116    logger.info("Enabling lz4diff %s", liblz4_path)
1117    additional_args += ["--liblz4_path", liblz4_path]
1118    erofs_compression_param = OPTIONS.target_info_dict.get(
1119        "erofs_default_compressor")
1120    assert erofs_compression_param is not None, "'erofs_default_compressor' not found in META/misc_info.txt of target build. This is required to enable lz4diff."
1121    additional_args += ["--erofs_compression_param", erofs_compression_param]
1122
1123  if OPTIONS.disable_vabc:
1124    additional_args += ["--disable_vabc=true"]
1125  if OPTIONS.enable_vabc_xor:
1126    additional_args += ["--enable_vabc_xor=true"]
1127  if OPTIONS.compressor_types:
1128    additional_args += ["--compressor_types", OPTIONS.compressor_types]
1129  additional_args += ["--max_timestamp", max_timestamp]
1130
1131  payload.Generate(
1132      target_file,
1133      source_file,
1134      additional_args + partition_timestamps_flags
1135  )
1136
1137  # Sign the payload.
1138  pw = OPTIONS.key_passwords[OPTIONS.package_key]
1139  payload_signer = PayloadSigner(
1140      OPTIONS.package_key, OPTIONS.private_key_suffix,
1141      pw, OPTIONS.payload_signer)
1142  payload.Sign(payload_signer)
1143
1144  # Write the payload into output zip.
1145  payload.WriteToZip(output_zip)
1146
1147  # Generate and include the secondary payload that installs secondary images
1148  # (e.g. system_other.img).
1149  if OPTIONS.include_secondary:
1150    # We always include a full payload for the secondary slot, even when
1151    # building an incremental OTA. See the comments for "--include_secondary".
1152    secondary_target_file = GetTargetFilesZipForSecondaryImages(
1153        target_file, OPTIONS.skip_postinstall)
1154    secondary_payload = PayloadGenerator(secondary=True)
1155    secondary_payload.Generate(secondary_target_file,
1156                               additional_args=["--max_timestamp",
1157                                                max_timestamp])
1158    secondary_payload.Sign(payload_signer)
1159    secondary_payload.WriteToZip(output_zip)
1160
1161  # If dm-verity is supported for the device, copy contents of care_map
1162  # into A/B OTA package.
1163  if target_info.get("avb_enable") == "true":
1164    # Adds care_map if either the protobuf format or the plain text one exists.
1165    for care_map_name in ["care_map.pb", "care_map.txt"]:
1166      if not DoesInputFileContain(target_file, "META/" + care_map_name):
1167        continue
1168      care_map_data = common.ReadBytesFromInputFile(
1169          target_file, "META/" + care_map_name)
1170      # In order to support streaming, care_map needs to be packed as
1171      # ZIP_STORED.
1172      common.ZipWriteStr(output_zip, care_map_name, care_map_data,
1173                         compress_type=zipfile.ZIP_STORED)
1174      # break here to avoid going into else when care map has been handled
1175      break
1176    else:
1177      logger.warning("Cannot find care map file in target_file package")
1178
1179  # Add the source apex version for incremental ota updates, and write the
1180  # result apex info to the ota package.
1181  ota_apex_info = ota_utils.ConstructOtaApexInfo(target_file, source_file)
1182  if ota_apex_info is not None:
1183    common.ZipWriteStr(output_zip, "apex_info.pb", ota_apex_info,
1184                       compress_type=zipfile.ZIP_STORED)
1185
1186  # We haven't written the metadata entry yet, which will be handled in
1187  # FinalizeMetadata().
1188  common.ZipClose(output_zip)
1189
1190  FinalizeMetadata(metadata, staging_file, output_file,
1191                   package_key=OPTIONS.package_key)
1192
1193
def main(argv):

  def option_handler(o, a):
    if o in ("-i", "--incremental_from"):
      OPTIONS.incremental_source = a
    elif o == "--full_radio":
      OPTIONS.full_radio = True
    elif o == "--full_bootloader":
      OPTIONS.full_bootloader = True
    elif o == "--wipe_user_data":
      OPTIONS.wipe_user_data = True
    elif o == "--downgrade":
      OPTIONS.downgrade = True
      OPTIONS.wipe_user_data = True
    elif o == "--override_timestamp":
      OPTIONS.downgrade = True
    elif o in ("-o", "--oem_settings"):
      OPTIONS.oem_source = a.split(',')
    elif o == "--oem_no_mount":
      OPTIONS.oem_no_mount = True
    elif o in ("-e", "--extra_script"):
      OPTIONS.extra_script = a
    elif o in ("-t", "--worker_threads"):
      if a.isdigit():
        OPTIONS.worker_threads = int(a)
      else:
        raise ValueError("Cannot parse value %r for option %r - only "
                         "integers are allowed." % (a, o))
    elif o in ("-2", "--two_step"):
      OPTIONS.two_step = True
    elif o == "--include_secondary":
      OPTIONS.include_secondary = True
    elif o == "--no_signing":
      OPTIONS.no_signing = True
    elif o == "--verify":
      OPTIONS.verify = True
    elif o == "--block":
      OPTIONS.block_based = True
    elif o in ("-b", "--binary"):
      OPTIONS.updater_binary = a
    elif o == "--stash_threshold":
      try:
        OPTIONS.stash_threshold = float(a)
      except ValueError:
        raise ValueError("Cannot parse value %r for option %r - expecting "
                         "a float" % (a, o))
    elif o == "--log_diff":
      OPTIONS.log_diff = a
    elif o == "--extracted_input_target_files":
      OPTIONS.extracted_input = a
    elif o == "--skip_postinstall":
      OPTIONS.skip_postinstall = True
    elif o == "--retrofit_dynamic_partitions":
      OPTIONS.retrofit_dynamic_partitions = True
    elif o == "--skip_compatibility_check":
      OPTIONS.skip_compatibility_check = True
    elif o == "--output_metadata_path":
      OPTIONS.output_metadata_path = a
    elif o == "--disable_fec_computation":
      OPTIONS.disable_fec_computation = True
    elif o == "--disable_verity_computation":
      OPTIONS.disable_verity_computation = True
    elif o == "--force_non_ab":
      OPTIONS.force_non_ab = True
    elif o == "--boot_variable_file":
      OPTIONS.boot_variable_file = a
    elif o == "--partial":
      if a:
        partitions = a.split()
        if not partitions:
          raise ValueError("Cannot parse partitions in {}".format(a))
      else:
        partitions = []
      OPTIONS.partial = partitions
    elif o == "--custom_image":
      custom_partition, custom_image = a.split("=")
      OPTIONS.custom_images[custom_partition] = custom_image
    elif o == "--disable_vabc":
      OPTIONS.disable_vabc = True
    elif o == "--spl_downgrade":
      OPTIONS.spl_downgrade = True
      OPTIONS.wipe_user_data = True
    elif o == "--vabc_downgrade":
      OPTIONS.vabc_downgrade = True
    elif o == "--enable_vabc_xor":
      assert a.lower() in ["true", "false"]
      OPTIONS.enable_vabc_xor = a.lower() != "false"
    elif o == "--force_minor_version":
      OPTIONS.force_minor_version = a
    elif o == "--compressor_types":
      OPTIONS.compressor_types = a
    elif o == "--enable_zucchini":
      assert a.lower() in ["true", "false"]
      OPTIONS.enable_zucchini = a.lower() != "false"
    elif o == "--enable_puffdiff":
      assert a.lower() in ["true", "false"]
      OPTIONS.enable_puffdiff = a.lower() != "false"
    elif o == "--enable_lz4diff":
      assert a.lower() in ["true", "false"]
      OPTIONS.enable_lz4diff = a.lower() != "false"
    elif o == "--vabc_compression_param":
      # Expected format: <algorithm> or <algorithm>,<level>; the level, if
      # given, must be an integer.
      words = a.split(",")
      assert 1 <= len(words) <= 2
      OPTIONS.vabc_compression_param = a.lower()
      if len(words) == 2:
        if not words[1].lstrip("-").isdigit():
          raise ValueError("Cannot parse compression level %r for option "
                           "--vabc_compression_param - only integers are "
                           "allowed." % words[1])
    elif o == "--security_patch_level":
      OPTIONS.security_patch_level = a
    elif o == "--max_threads":
      if a.isdigit():
        OPTIONS.max_threads = a
      else:
        raise ValueError("Cannot parse value %r for option %r - only "
                         "integers are allowed." % (a, o))
    elif o == "--compression_factor":
      values = ["4k", "8k", "16k", "32k", "64k", "128k", "256k"]
      if a in values:
        # Convert e.g. "64k" into the block size in bytes ("65536").
        OPTIONS.compression_factor = str(int(a[:-1]) * 1024)
      else:
        raise ValueError("Please specify a value from the following options: "
                         "4k, 8k, 16k, 32k, 64k, 128k, 256k")

    elif o == "--vabc_cow_version":
      if a.isdigit():
        OPTIONS.vabc_cow_version = a
      else:
        raise ValueError("Cannot parse value %r for option %r - only "
                         "integers are allowed." % (a, o))
    else:
      return False
    return True

  args = common.ParseOptions(argv, __doc__,
                             extra_opts="b:k:i:d:e:t:2o:",
                             extra_long_opts=[
                                 "incremental_from=",
                                 "full_radio",
                                 "full_bootloader",
                                 "wipe_user_data",
                                 "downgrade",
                                 "override_timestamp",
                                 "extra_script=",
                                 "worker_threads=",
                                 "two_step",
                                 "include_secondary",
                                 "no_signing",
                                 "block",
                                 "binary=",
                                 "oem_settings=",
                                 "oem_no_mount",
                                 "verify",
                                 "stash_threshold=",
                                 "log_diff=",
                                 "extracted_input_target_files=",
                                 "skip_postinstall",
                                 "retrofit_dynamic_partitions",
                                 "skip_compatibility_check",
                                 "output_metadata_path=",
                                 "disable_fec_computation",
                                 "disable_verity_computation",
                                 "force_non_ab",
                                 "boot_variable_file=",
                                 "partial=",
                                 "custom_image=",
                                 "disable_vabc",
                                 "spl_downgrade",
                                 "vabc_downgrade",
                                 "enable_vabc_xor=",
                                 "force_minor_version=",
                                 "compressor_types=",
                                 "enable_zucchini=",
                                 "enable_puffdiff=",
                                 "enable_lz4diff=",
                                 "vabc_compression_param=",
                                 "security_patch_level=",
                                 "max_threads=",
                                 "vabc_cow_version=",
                                 "compression_factor=",
                             ], extra_option_handler=[option_handler, payload_signer.signer_options])
  common.InitLogging()

  if len(args) != 2:
    common.Usage(__doc__)
    sys.exit(1)

  # Load the build info dicts from the zip directly or from the extracted input
  # directory. We don't need to unzip the entire target-files zips, because they
  # won't be needed for A/B OTAs (brillo_update_payload does that on its own).
  # When loading the info dicts, we also skip the second parameter to
  # common.LoadInfoDict(); it only replaces properties such as 'selinux_fc' and
  # 'ramdisk_dir' with their actual paths, which aren't used during OTA
  # generation.
  if OPTIONS.extracted_input is not None:
    OPTIONS.info_dict = common.LoadInfoDict(OPTIONS.extracted_input)
  else:
    OPTIONS.info_dict = common.LoadInfoDict(args[0])

  if OPTIONS.wipe_user_data:
    if not OPTIONS.vabc_downgrade:
      logger.info("Detected downgrade/datawipe OTA. "
                  "When wiping userdata, a VABC OTA makes the user "
                  "wait in recovery mode for the merge to finish, so VABC is "
                  "disabled by default. If you really want a VABC downgrade, "
                  "pass --vabc_downgrade")
      OPTIONS.disable_vabc = True
    # We should only allow downgrading incrementals (as opposed to full).
    # Otherwise the device could be rolled back from an arbitrary build by
    # applying this full OTA package.
  if OPTIONS.incremental_source is None and OPTIONS.downgrade:
    raise ValueError("Cannot generate downgradable full OTAs")

  # TODO(xunchang) for retrofit and partial updates, maybe we should rebuild the
  # target-file and reload the info_dict. So the info will be consistent with
  # the modified target-file.

  logger.info("--- target info ---")
  common.DumpInfoDict(OPTIONS.info_dict)

  # Load the source build dict if applicable.
  if OPTIONS.incremental_source is not None:
    OPTIONS.target_info_dict = OPTIONS.info_dict
    OPTIONS.source_info_dict = ParseInfoDict(OPTIONS.incremental_source)

    logger.info("--- source info ---")
    common.DumpInfoDict(OPTIONS.source_info_dict)

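  # For partial updates, restrict the list of A/B partitions to the subset
  # requested on the command line.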
  if OPTIONS.partial:
    OPTIONS.info_dict['ab_partitions'] = list(
        set(OPTIONS.info_dict['ab_partitions']) & set(OPTIONS.partial))
    if OPTIONS.source_info_dict:
      OPTIONS.source_info_dict['ab_partitions'] = list(
          set(OPTIONS.source_info_dict['ab_partitions']) &
          set(OPTIONS.partial))

  # Load OEM dicts if provided.
  OPTIONS.oem_dicts = _LoadOemDicts(OPTIONS.oem_source)

  # Assume retrofitting dynamic partitions when base build does not set
  # use_dynamic_partitions but target build does.
  if (OPTIONS.source_info_dict and
      OPTIONS.source_info_dict.get("use_dynamic_partitions") != "true" and
      OPTIONS.target_info_dict.get("use_dynamic_partitions") == "true"):
    if OPTIONS.target_info_dict.get("dynamic_partition_retrofit") != "true":
      raise common.ExternalError(
          "Expected to generate an incremental OTA for retrofitting dynamic "
          "partitions, but dynamic_partition_retrofit is not set in the "
          "target build.")
    logger.info("Implicitly generating retrofit incremental OTA.")
    OPTIONS.retrofit_dynamic_partitions = True

  # Skip postinstall for retrofitting dynamic partitions.
  if OPTIONS.retrofit_dynamic_partitions:
    OPTIONS.skip_postinstall = True

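  # Devices that set ab_update get an A/B (payload-based) OTA unless
  # --force_non_ab is passed, which is only honored when the build also
  # allows non-A/B updates.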
  ab_update = OPTIONS.info_dict.get("ab_update") == "true"
  allow_non_ab = OPTIONS.info_dict.get("allow_non_ab") == "true"
  if OPTIONS.force_non_ab:
    assert allow_non_ab, \
        "--force_non_ab only allowed on devices that support non-A/B"
    assert ab_update, "--force_non_ab only allowed on A/B devices"

  generate_ab = not OPTIONS.force_non_ab and ab_update

  # Use the default key to sign the package if not specified with package_key.
  # A package key is always needed for A/B updates, so define one whenever an
  # A/B update is being created.
  if not OPTIONS.no_signing or generate_ab:
    if OPTIONS.package_key is None:
      OPTIONS.package_key = OPTIONS.info_dict.get(
          "default_system_dev_certificate",
          "build/make/target/product/security/testkey")
    # Get the passwords for the signing keys.
    OPTIONS.key_passwords = common.GetKeyPasswords([OPTIONS.package_key])

    # Only check for the existence of the key file when using the default
    # signer, because a custom signer might not need the key file at all.
    # b/191704641
    if not OPTIONS.payload_signer:
      private_key_path = OPTIONS.package_key + OPTIONS.private_key_suffix
      if not os.path.exists(private_key_path):
        raise common.ExternalError(
            "Private key {} doesn't exist. Make sure you passed the correct "
            "key path through the -k option".format(private_key_path))
      signapk_abs_path = os.path.join(
          OPTIONS.search_path, OPTIONS.signapk_path)
      if not os.path.exists(signapk_abs_path):
        raise common.ExternalError(
            "Failed to find the signapk binary {} in search path {}. Make "
            "sure the correct search path is passed via -p".format(
                OPTIONS.signapk_path, OPTIONS.search_path))

  if OPTIONS.source_info_dict:
    source_build_prop = OPTIONS.source_info_dict["build.prop"]
    target_build_prop = OPTIONS.target_info_dict["build.prop"]
    source_spl = source_build_prop.GetProp(SECURITY_PATCH_LEVEL_PROP_NAME)
    target_spl = target_build_prop.GetProp(SECURITY_PATCH_LEVEL_PROP_NAME)
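    # Note that the SPL strings are compared lexicographically; this matches
    # chronological order only when both builds use the same yyyy-mm-dd style
    # format.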
    is_spl_downgrade = target_spl < source_spl
    if is_spl_downgrade and target_build_prop.GetProp("ro.build.tags") == "release-keys":
      raise common.ExternalError(
          "Target security patch level {} is older than source SPL {}. "
          "A locked bootloader will reject an SPL downgrade no matter "
          "what (even if a data wipe is done), so SPL downgrades on "
          "release-keys builds are not allowed.".format(target_spl, source_spl))

    logger.info("SPL downgrade on %s",
                target_build_prop.GetProp("ro.build.tags"))
    if is_spl_downgrade and not OPTIONS.spl_downgrade and not OPTIONS.downgrade:
      raise common.ExternalError(
          "Target security patch level {} is older than source SPL {}; "
          "applying such an OTA will likely leave the device unable to boot. "
          "Pass --spl_downgrade to override this check. This script expects "
          "the security patch level to be in yyyy-mm-dd format (e.g. "
          "2021-02-05). Separators other than - may be used, as long as they "
          "are used consistently across all SPL dates."
          .format(target_spl, source_spl))
    elif not is_spl_downgrade and OPTIONS.spl_downgrade:
      raise ValueError("--spl_downgrade specified but no actual SPL downgrade"
                       " detected. Please only pass in this flag if you want"
                       " an SPL downgrade. Target SPL: {} Source SPL: {}"
                       .format(target_spl, source_spl))
  if generate_ab:
    GenerateAbOtaPackage(
        target_file=args[0],
        output_file=args[1],
        source_file=OPTIONS.incremental_source)
  else:
    GenerateNonAbOtaPackage(
        target_file=args[0],
        output_file=args[1],
        source_file=OPTIONS.incremental_source)

  # Post-OTA-generation work.
  if OPTIONS.incremental_source is not None and OPTIONS.log_diff:
    logger.info("Generating diff logs...")
    logger.info("Unzipping target-files for diffing...")
    target_dir = common.UnzipTemp(args[0], TARGET_DIFFING_UNZIP_PATTERN)
    source_dir = common.UnzipTemp(
        OPTIONS.incremental_source, TARGET_DIFFING_UNZIP_PATTERN)

    with open(OPTIONS.log_diff, 'w') as out_file:
      target_files_diff.recursiveDiff(
          '', source_dir, target_dir, out_file)

  logger.info("done.")


if __name__ == '__main__':
  try:
    common.CloseInheritedPipes()
    main(sys.argv[1:])
  finally:
    common.Cleanup()