  1. Oct 16, 2023
    • arm64: Fixup user features at boot time · 7f632d33
      Mark Rutland authored
      
      For ARM64_WORKAROUND_2658417, we use a cpu_enable() callback to hide the
      ID_AA64ISAR1_EL1.BF16 ID register field. This is a little awkward as
      CPUs may attempt to apply the workaround concurrently, requiring that we
      protect the bulk of the callback with a raw_spinlock, and requiring some
      pointless work every time a CPU is subsequently hotplugged in.
      
      This patch makes this a little simpler by handling the masking once at
      boot time. A new user_feature_fixup() function is called at the start of
      setup_user_features() to mask the feature, matching the style of
      elf_hwcap_fixup(). The ARM64_WORKAROUND_2658417 cpucap is added to
      cpucap_is_possible() so that code can be elided entirely when this is
      not possible.
      
      Note that the ARM64_WORKAROUND_2658417 capability is matched with
      ERRATA_MIDR_RANGE(), which implicitly gives the capability an
      ARM64_CPUCAP_LOCAL_CPU_ERRATUM type, which forbids the late onlining of
      a CPU with the erratum if the erratum was not present at boot time.
      Therefore this patch doesn't change the behaviour for late onlining.
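
      As a rough illustration, a boot-time fixup of this shape could mask the
      field once (a sketch only: the helper get_arm64_ftr_reg() and the BF16
      mask macro are assumptions about the surrounding cpufeature code, not
      quoted from the patch):

      static void __init user_feature_fixup(void)
      {
              /*
               * Sketch: clear the BF16 field from the sanitised view of
               * ID_AA64ISAR1_EL1 exposed to userspace, once, at boot.
               */
              if (cpus_have_cap(ARM64_WORKAROUND_2658417)) {
                      struct arm64_ftr_reg *regp;

                      regp = get_arm64_ftr_reg(SYS_ID_AA64ISAR1_EL1);
                      if (regp)
                              regp->user_mask &= ~ID_AA64ISAR1_EL1_BF16_MASK;
              }
      }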
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. Sep 29, 2023
    • arm64: errata: Add Cortex-A520 speculative unprivileged load workaround · 471470bc
      Rob Herring authored
      
      Implement the workaround for ARM Cortex-A520 erratum 2966298. On an
      affected Cortex-A520 core, a speculatively executed unprivileged load
      might leak data from a privileged load via a cache side channel. The
      issue only exists for loads within a translation regime with the same
      translation (e.g. same ASID and VMID). Therefore, the issue only affects
      the return to EL0.
      
      The workaround is to execute a TLBI before returning to EL0 after all
      loads of privileged data. A non-shareable TLBI to any address is
      sufficient.
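
      Expressed as a standalone inline-asm sketch (the helper name below is
      hypothetical; the real change patches this sequence into the EL0
      exception-return path in the entry assembly):

      /* Sketch: TLBI to any address (XZR, i.e. VA 0) in the non-shareable
       * domain, plus a DSB to complete it, issued on affected cores just
       * before the return to EL0. */
      static inline void a520_erratum_2966298_tlbi(void)
      {
              asm volatile(
                      "tlbi   vale1, xzr\n"
                      "dsb    nsh\n"
                      : : : "memory");
      }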
      
      The workaround isn't necessary if page table isolation (KPTI) is
      enabled, but for simplicity it is applied regardless. Page table
      isolation should normally be disabled for Cortex-A520 anyway, as it
      supports the CSV3 feature and the E0PD feature (used when KASLR is
      enabled).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Rob Herring <robh@kernel.org>
      Link: https://lore.kernel.org/r/20230921194156.1050055-2-robh@kernel.org

      Signed-off-by: Will Deacon <will@kernel.org>
  3. Jan 06, 2023
    • arm64: errata: Workaround possible Cortex-A715 [ESR|FAR]_ELx corruption · 5db568e7
      Anshuman Khandual authored
      
      If a Cortex-A715 CPU sees a page mapping's permissions change from
      executable to non-executable, it may corrupt the ESR_ELx and FAR_ELx
      registers on the next instruction abort caused by a permission fault.

      Only user space performs the executable to non-executable permission
      transition, via the mprotect() system call, which invokes the
      ptep_modify_prot_start() and ptep_modify_prot_commit() helpers while
      changing the page mapping. Platform code can override these helpers
      via __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION.

      Work around the problem by doing a break-before-make TLB invalidation
      for all executable user-space mappings that go through the mprotect()
      system call. This overrides ptep_modify_prot_start() and
      ptep_modify_prot_commit() by defining
      __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION on the platform, giving an
      opportunity to intercept user-space exec mappings and do the necessary
      TLB invalidation. Similar interception is also implemented for HugeTLB.
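
      A minimal sketch of what such an override can look like (the capability
      name ARM64_WORKAROUND_2645198 matches this erratum's Kconfig entry; the
      exact checks in the merged code may differ):

      #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION

      pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
                                   unsigned long addr, pte_t *ptep)
      {
              /*
               * Sketch: force break-before-make for executable user
               * mappings on affected cores by clearing the PTE and
               * flushing the TLB before the new entry is written.
               */
              if (cpus_have_const_cap(ARM64_WORKAROUND_2645198) &&
                  vma_is_accessible(vma) && pte_user_exec(READ_ONCE(*ptep)))
                      return ptep_clear_flush(vma, addr, ptep);

              return ptep_get_and_clear(vma->vm_mm, addr, ptep);
      }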
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Link: https://lore.kernel.org/r/20230102061651.34745-1-anshuman.khandual@arm.com

      Signed-off-by: Will Deacon <will@kernel.org>
  4. Nov 18, 2022
    • arm64: errata: Workaround possible Cortex-A715 [ESR|FAR]_ELx corruption · 44ecda71
      Anshuman Khandual authored
      
      If a Cortex-A715 CPU sees a page mapping's permissions change from
      executable to non-executable, it may corrupt the ESR_ELx and FAR_ELx
      registers on the next instruction abort caused by a permission fault.

      Only user space performs the executable to non-executable permission
      transition, via the mprotect() system call, which invokes the
      ptep_modify_prot_start() and ptep_modify_prot_commit() helpers while
      changing the page mapping. Platform code can override these helpers
      via __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION.

      Work around the problem by doing a break-before-make TLB invalidation
      for all executable user-space mappings that go through the mprotect()
      system call. This overrides ptep_modify_prot_start() and
      ptep_modify_prot_commit() by defining
      __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION on the platform, giving an
      opportunity to intercept user-space exec mappings and do the necessary
      TLB invalidation (a sketch of this override appears under the
      Jan 06, 2023 entry above). Similar interception is also implemented for
      HugeTLB.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/20221116140915.356601-3-anshuman.khandual@arm.com

      Signed-off-by: Will Deacon <will@kernel.org>
  5. Aug 23, 2022
    • arm64: errata: add detection for AMEVCNTR01 incrementing incorrectly · e89d120c
      Ionela Voinescu authored
      
      The AMU counter AMEVCNTR01 (constant counter) should increment at the same
      rate as the system counter. On affected Cortex-A510 cores, AMEVCNTR01
      increments incorrectly giving a significantly higher output value. This
      results in inaccurate task scheduler utilization tracking and incorrect
      feedback on CPU frequency.
      
      Work around this problem by returning 0 when reading the affected
      counter in key locations, which disables all users of this counter,
      whether for frequency invariance or as the FFH reference counter (a
      sketch follows the list below). This has the same effect as firmware
      disabling the affected counters.
      
      Details on how the two features are affected by this erratum:
      
       - AMU counters will not be used for frequency invariance for affected
         CPUs and CPUs in the same cpufreq policy. AMUs can still be used for
         frequency invariance for unaffected CPUs in the system. Although
         unlikely, if no alternative method can be found to support frequency
         invariance for affected CPUs (cpufreq based or solution based on
         platform counters) frequency invariance will be disabled. Please check
         the chapter on frequency invariance at
         Documentation/scheduler/sched-capacity.rst for details of its effect.
      
       - Given that FFH can be used to fetch either the core or constant counter
         values, restrictions are lifted regarding any of these counters
         returning a valid (!0) value. Therefore FFH is considered supported
         if there is at least one CPU that supports AMUs, independent of any
         counters being disabled or affected by this erratum. Clarifying
         comments are now added to the cpc_ffh_supported(), cpu_read_constcnt()
         and cpu_read_corecnt() functions.
      
      The above is achieved through adding a new erratum: ARM64_ERRATUM_2457168.
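
      A hedged sketch of the affected read path (cpu_read_constcnt() is named
      above; the capability check and register accessor are assumptions about
      the surrounding code):

      u64 cpu_read_constcnt(void)
      {
              /*
               * Sketch: report 0 for the constant counter on CPUs
               * affected by erratum 2457168, as if firmware had
               * disabled the counter.
               */
              if (this_cpu_has_cap(ARM64_WORKAROUND_2457168))
                      return 0;

              return read_sysreg_s(SYS_AMEVCNTR0_CONST_EL0);
      }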
      
      Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Link: https://lore.kernel.org/r/20220819103050.24211-1-ionela.voinescu@arm.com

      Signed-off-by: Will Deacon <will@kernel.org>
  6. Mar 18, 2022
    • arm64: errata: avoid duplicate field initializer · 316e46f6
      Arnd Bergmann authored
      
      The '.type' field is initialized both in place and in the macro
      as reported by this W=1 warning:
      
      arch/arm64/include/asm/cpufeature.h:281:9: error: initialized field overwritten [-Werror=override-init]
        281 |         (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
            |         ^
      arch/arm64/kernel/cpu_errata.c:136:17: note: in expansion of macro 'ARM64_CPUCAP_LOCAL_CPU_ERRATUM'
        136 |         .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,                         \
            |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      arch/arm64/kernel/cpu_errata.c:145:9: note: in expansion of macro 'ERRATA_MIDR_RANGE'
        145 |         ERRATA_MIDR_RANGE(m, var, r_min, var, r_max)
            |         ^~~~~~~~~~~~~~~~~
      arch/arm64/kernel/cpu_errata.c:613:17: note: in expansion of macro 'ERRATA_MIDR_REV_RANGE'
        613 |                 ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
            |                 ^~~~~~~~~~~~~~~~~~~~~
      arch/arm64/include/asm/cpufeature.h:281:9: note: (near initialization for 'arm64_errata[18].type')
        281 |         (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)
            |         ^
      
      Remove the extraneous initializer.
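
      The pattern behind the warning, reduced to a hypothetical standalone
      example (not the kernel source): a macro expands to a designated
      initializer for .type, and the call site then sets .type again.

      struct capability {
              int type;
              int midr_min, midr_max;
      };

      /* The macro already initializes .type ... */
      #define ERRATUM_RANGE(min, max) \
              .type = 1, .midr_min = (min), .midr_max = (max)

      struct capability cap = {
              ERRATUM_RANGE(0, 2),
              .type = 1,      /* ... so this duplicate trips -Woverride-init */
      };

      Dropping the second .type line resolves the warning without changing
      the initialized value.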
      
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: 1dd498e5 ("KVM: arm64: Workaround Cortex-A510's single-step and PAC trap errata")
      Link: https://lore.kernel.org/r/20220316183800.1546731-1-arnd@kernel.org

      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  7. Feb 24, 2022
    • arm64: Mitigate spectre style branch history side channels · 558c303c
      James Morse authored
      
      Speculation attacks against some high-performance processors can
      make use of branch history to influence future speculation.
      As a mitigation, when taking an exception from user-space, a sequence
      of branches or a firmware call overwrites or invalidates the branch
      history.
      
      The sequence of branches is added to the vectors, and should appear
      before the first indirect branch. For systems using KPTI the sequence
      is added to the kpti trampoline where it has a free register as the exit
      from the trampoline is via a 'ret'. For systems not using KPTI, the same
      register tricks are used to free up a register in the vectors.
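
      The overall shape of that branch sequence, as an illustrative sketch
      only (the real loop lives in the exception vectors and its iteration
      count is patched per CPU; 32 below is just an example):

      static inline void clear_branch_history(void)
      {
              long tmp;

              /*
               * Sketch: a loop of taken branches overwrites the branch
               * history; the barriers keep later speculation from running
               * ahead of the overwrite.
               */
              asm volatile(
                      "mov    %0, #32\n"
                      "1:     b       2f\n"
                      "2:     subs    %0, %0, #1\n"
                      "b.ne   1b\n"
                      "dsb    nsh\n"
                      "isb\n"
                      : "=&r" (tmp) : : "cc", "memory");
      }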
      
      For the firmware call, arch-workaround-3 clobbers 4 registers, so
      there is no choice but to save them to the EL1 stack. This only happens
      for entry from EL0, so if we take an exception due to the stack access,
      it will not become re-entrant.
      
      For KVM, the existing branch-predictor-hardening vectors are used.
      When a spectre version of these vectors is in use, the firmware call
      is sufficient to mitigate against Spectre-BHB. For the non-spectre
      versions, the sequence of branches is added to the indirect vector.
      
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
  8. Feb 08, 2021
    • arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround · 6459b846
      Mark Rutland authored
      
      The workaround for Cortex-A76 erratum 1463225 is split across the
      syscall and debug handlers in separate files. This structure currently
      forces us to do some redundant work for debug exceptions from EL0, is a
      little difficult to follow, and gets in the way of some future rework of
      the exception entry code as it requires exceptions to be unmasked late
      in the syscall handling path.
      
      To simplify things, and as a preparatory step for future rework of
      exception entry, this patch moves all the workaround logic into
      entry-common.c. As the debug handler only needs to run for EL1 debug
      exceptions, we no longer call it for EL0 debug exceptions, and no longer
      need to check user_mode(regs) as this is always false. For clarity
      cortex_a76_erratum_1463225_debug_handler() is changed to return bool.
      
      In the SVC path, the workaround is applied earlier, but this should have
      no functional impact as exceptions are still masked. In the debug path
      we run the fixup before explicitly disabling preemption, but we will not
      attempt to preempt before returning from the exception.
      
      There should be no functional change as a result of this patch.
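
      A sketch of the consolidated shape in entry-common.c (the handler name
      follows the text above; the per-cpu flag and the elided fixup body are
      assumptions):

      static bool cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
      {
              if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1463225))
                      return false;

              /*
               * Sketch: only EL1 debug exceptions get here now, so no
               * user_mode(regs) check is needed. Return true when the
               * spurious debug exception was caused by the workaround
               * and has been handled.
               */
              if (!__this_cpu_read(__in_cortex_a76_erratum_1463225_wa))
                      return false;

              /* ... consume the erratum-induced debug exception ... */
              return true;
      }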
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210202120341.28858-1-mark.rutland@arm.com

      Signed-off-by: Will Deacon <will@kernel.org>