libtsan: update to LLVM 22

Alex Rønne Petersen
2026-01-17 06:02:54 +01:00
parent e79b4e907a
commit dbaea8d67e
74 changed files with 1913 additions and 558 deletions
+311
@@ -0,0 +1,311 @@
==============================================================================
The LLVM Project is under the Apache License v2.0 with LLVM Exceptions:
==============================================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---- LLVM Exceptions to the Apache 2.0 License ----
As an exception, if, as a result of your compiling your source code, portions
of this Software are embedded into an Object form of such source code, you
may redistribute such embedded portions in such Object form without complying
with the conditions of Sections 4(a), 4(b) and 4(d) of the License.
In addition, if you combine or link compiled forms of this Software with
software that is licensed under the GPLv2 ("Combined Software") and if a
court of competent jurisdiction determines that the patent provision (Section
3), the indemnity provision (Section 9) or other Section of the License
conflicts with the conditions of the GPLv2, you may retroactively and
prospectively choose to deem waived or otherwise exclude such Section(s) of
the License, but only in their entirety and only with respect to the Combined
Software.
==============================================================================
Software from third parties included in the LLVM Project:
==============================================================================
The LLVM Project contains third party software which is under different license
terms. All such code will be identified clearly using at least one of two
mechanisms:
1) It will be in a separate directory tree with its own `LICENSE.txt` or
`LICENSE` file at the top containing the specific license and restrictions
which apply to that software, or
2) It will contain specific license and restriction terms at the top of every
file.
==============================================================================
Legacy LLVM License (https://llvm.org/docs/DeveloperPolicy.html#legacy):
==============================================================================
The compiler_rt library is dual licensed under both the University of Illinois
"BSD-Like" license and the MIT license. As a user of this code you may choose
to use it under either license. As a contributor, you agree to allow your code
to be used under both.
Full text of the relevant licenses is included below.
==============================================================================
University of Illinois/NCSA
Open Source License
Copyright (c) 2009-2019 by the contributors listed in CREDITS.TXT
All rights reserved.
Developed by:
LLVM Team
University of Illinois at Urbana-Champaign
http://llvm.org
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal with
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimers.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimers in the
documentation and/or other materials provided with the distribution.
* Neither the names of the LLVM Team, University of Illinois at
Urbana-Champaign, nor the names of its contributors may be used to
endorse or promote products derived from this Software without specific
prior written permission.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE
SOFTWARE.
==============================================================================
Copyright (c) 2009-2015 by the contributors listed in CREDITS.TXT
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
+41 -8
@@ -14,7 +14,7 @@
#ifndef COMPILERRT_ASSEMBLY_H
#define COMPILERRT_ASSEMBLY_H
#if defined(__linux__) && defined(__CET__)
#ifdef __CET__
#if __has_include(<cet.h>)
#include <cet.h>
#endif
@@ -71,19 +71,35 @@
#endif
#if defined(__aarch64__) && defined(__ELF__) && \
defined(COMPILER_RT_EXECUTE_ONLY_CODE)
// The assembler always creates an implicit '.text' section with default flags
// (SHF_ALLOC | SHF_EXECINSTR), which is incompatible with the execute-only
// '.text' section we want to create here because of the missing
// SHF_AARCH64_PURECODE section flag. To solve this, we use 'unique,0' to
// differentiate the two sections. The output will therefore have two separate
// sections named '.text', where code will be placed into the execute-only
// '.text' section, and the implicitly-created one will be empty.
#define TEXT_SECTION \
.section .text,"axy",@progbits,unique,0
#else
#define TEXT_SECTION \
.text
#endif
#if defined(__arm__) || defined(__aarch64__) || defined(__arm64ec__)
#define FUNC_ALIGN \
.text SEPARATOR \
.balign 16 SEPARATOR
#else
#define FUNC_ALIGN
#endif
// BTI and PAC gnu property note
// BTI, PAC, and GCS gnu property note
#define NT_GNU_PROPERTY_TYPE_0 5
#define GNU_PROPERTY_AARCH64_FEATURE_1_AND 0xc0000000
#define GNU_PROPERTY_AARCH64_FEATURE_1_BTI 1
#define GNU_PROPERTY_AARCH64_FEATURE_1_PAC 2
#define GNU_PROPERTY_AARCH64_FEATURE_1_GCS 4
#if defined(__ARM_FEATURE_BTI_DEFAULT)
#define BTI_FLAG GNU_PROPERTY_AARCH64_FEATURE_1_BTI
@@ -97,6 +113,12 @@
#define PAC_FLAG 0
#endif
#if defined(__ARM_FEATURE_GCS_DEFAULT)
#define GCS_FLAG GNU_PROPERTY_AARCH64_FEATURE_1_GCS
#else
#define GCS_FLAG 0
#endif
#define GNU_PROPERTY(type, value) \
.pushsection .note.gnu.property, "a" SEPARATOR \
.p2align 3 SEPARATOR \
@@ -118,11 +140,12 @@
#define BTI_J
#endif
#if (BTI_FLAG | PAC_FLAG) != 0
#define GNU_PROPERTY_BTI_PAC \
GNU_PROPERTY(GNU_PROPERTY_AARCH64_FEATURE_1_AND, BTI_FLAG | PAC_FLAG)
#if (BTI_FLAG | PAC_FLAG | GCS_FLAG) != 0
#define GNU_PROPERTY_BTI_PAC_GCS \
GNU_PROPERTY(GNU_PROPERTY_AARCH64_FEATURE_1_AND, \
BTI_FLAG | PAC_FLAG | GCS_FLAG)
#else
#define GNU_PROPERTY_BTI_PAC
#define GNU_PROPERTY_BTI_PAC_GCS
#endif
#if defined(__clang__) || defined(__GCC_HAVE_DWARF2_CFI_ASM)
@@ -247,6 +270,7 @@
#endif
#define DEFINE_COMPILERRT_FUNCTION(name) \
TEXT_SECTION SEPARATOR \
DEFINE_CODE_STATE \
FILE_LEVEL_DIRECTIVE SEPARATOR \
.globl FUNC_SYMBOL(SYMBOL_NAME(name)) SEPARATOR \
@@ -256,6 +280,7 @@
FUNC_SYMBOL(SYMBOL_NAME(name)):
#define DEFINE_COMPILERRT_THUMB_FUNCTION(name) \
TEXT_SECTION SEPARATOR \
DEFINE_CODE_STATE \
FILE_LEVEL_DIRECTIVE SEPARATOR \
.globl FUNC_SYMBOL(SYMBOL_NAME(name)) SEPARATOR \
@@ -265,6 +290,7 @@
FUNC_SYMBOL(SYMBOL_NAME(name)):
#define DEFINE_COMPILERRT_PRIVATE_FUNCTION(name) \
TEXT_SECTION SEPARATOR \
DEFINE_CODE_STATE \
FILE_LEVEL_DIRECTIVE SEPARATOR \
.globl FUNC_SYMBOL(SYMBOL_NAME(name)) SEPARATOR \
@@ -274,6 +300,7 @@
FUNC_SYMBOL(SYMBOL_NAME(name)):
#define DEFINE_COMPILERRT_PRIVATE_FUNCTION_UNMANGLED(name) \
TEXT_SECTION SEPARATOR \
DEFINE_CODE_STATE \
.globl FUNC_SYMBOL(name) SEPARATOR \
SYMBOL_IS_FUNC(name) SEPARATOR \
@@ -282,6 +309,7 @@
FUNC_SYMBOL(name):
#define DEFINE_COMPILERRT_OUTLINE_FUNCTION_UNMANGLED(name) \
TEXT_SECTION SEPARATOR \
DEFINE_CODE_STATE \
FUNC_ALIGN \
.globl FUNC_SYMBOL(name) SEPARATOR \
@@ -296,7 +324,7 @@
.globl FUNC_SYMBOL(SYMBOL_NAME(name)) SEPARATOR \
SYMBOL_IS_FUNC(SYMBOL_NAME(name)) SEPARATOR \
DECLARE_SYMBOL_VISIBILITY(name) SEPARATOR \
.set FUNC_SYMBOL(SYMBOL_NAME(name)), FUNC_SYMBOL(target) SEPARATOR
.set FUNC_SYMBOL(SYMBOL_NAME(name)), FUNC_SYMBOL(SYMBOL_NAME(target)) SEPARATOR
#if defined(__ARM_EABI__)
#define DEFINE_AEABI_FUNCTION_ALIAS(aeabi_name, name) \
@@ -329,4 +357,9 @@
#endif
#endif
#if defined(__ASSEMBLER__) && (defined(__i386__) || defined(__amd64__)) && \
!defined(__arm64ec__)
.att_syntax
#endif
#endif // COMPILERRT_ASSEMBLY_H
+4
@@ -646,6 +646,7 @@ static size_t GetInstructionSize(uptr address, size_t* rel_offset = nullptr) {
case 0xC033: // 33 C0 : xor eax, eax
case 0xC933: // 33 C9 : xor ecx, ecx
case 0xD233: // 33 D2 : xor edx, edx
case 0xFF33: // 33 FF : xor edi, edi
case 0x9066: // 66 90 : xchg %ax,%ax (Two-byte NOP)
case 0xDB84: // 84 DB : test bl,bl
case 0xC084: // 84 C0 : test al,al
@@ -764,6 +765,7 @@ static size_t GetInstructionSize(uptr address, size_t* rel_offset = nullptr) {
switch (0x00FFFFFF & *(u32 *)address) {
case 0x10b70f: // 0f b7 10 : movzx edx, WORD PTR [rax]
case 0x02b70f: // 0f b7 02 : movzx eax, WORD PTR [rdx]
case 0xc00b4d: // 4d 0b c0 : or r8, r8
case 0xc03345: // 45 33 c0 : xor r8d, r8d
case 0xc08548: // 48 85 c0 : test rax, rax
@@ -799,6 +801,7 @@ static size_t GetInstructionSize(uptr address, size_t* rel_offset = nullptr) {
case 0xc9854d: // 4d 85 c9 : test r9, r9
case 0xc98b4c: // 4c 8b c9 : mov r9, rcx
case 0xd12948: // 48 29 d1 : sub rcx, rdx
case 0xc22b4c: // 4c 2b c2 : sub r8, rdx
case 0xca2b48: // 48 2b ca : sub rcx, rdx
case 0xca3b48: // 48 3b ca : cmp rcx, rdx
case 0xd12b48: // 48 2b d1 : sub rdx, rcx
@@ -813,6 +816,7 @@ static size_t GetInstructionSize(uptr address, size_t* rel_offset = nullptr) {
case 0xd9f748: // 48 f7 d9 : neg rcx
case 0xc03145: // 45 31 c0 : xor r8d,r8d
case 0xc93145: // 45 31 c9 : xor r9d,r9d
case 0xd23345: // 45 33 d2 : xor r10d, r10d
case 0xdb3345: // 45 33 db : xor r11d, r11d
case 0xc08445: // 45 84 c0 : test r8b,r8b
case 0xd28445: // 45 84 d2 : test r10b,r10b
@@ -288,6 +288,7 @@ class SizeClassAllocator32 {
uptr ComputeRegionId(uptr mem) const {
if (SANITIZER_SIGN_EXTENDED_ADDRESSES)
mem &= (kSpaceSize - 1);
mem -= kSpaceBeg;
const uptr res = mem >> kRegionSizeLog;
CHECK_LT(res, kNumPossibleRegions);
return res;
@@ -113,6 +113,24 @@ class SizeClassAllocator64 {
// ~(uptr)0.
void Init(s32 release_to_os_interval_ms, uptr heap_start = 0) {
uptr TotalSpaceSize = kSpaceSize + AdditionalSize();
uptr MaxAddr = GetMaxUserVirtualAddress();
// VReport does not call the sanitizer allocator.
VReport(3, "Max user virtual address: 0x%zx\n", MaxAddr);
VReport(3, "Total space size for primary allocator: 0x%zx\n",
TotalSpaceSize);
// TODO: revise the check if we ever configure sanitizers to deliberately
// map beyond the 2**48 barrier (note that Linux pretends the VMA is
// limited to 48-bit for backwards compatibility, but allows apps to
// explicitly specify an address beyond that).
if (heap_start + TotalSpaceSize >= MaxAddr) {
// We can't easily adjust the requested heap size, because kSpaceSize is
// const (for optimization) and used throughout the code.
VReport(0, "Error: heap size %zx exceeds max user virtual address %zx\n",
TotalSpaceSize, MaxAddr);
VReport(
0, "Try using a kernel that allows a larger virtual address space\n");
}
PremappedHeap = heap_start != 0;
if (PremappedHeap) {
CHECK(!kUsingConstantSpaceBeg);
+15 -2
@@ -78,8 +78,8 @@ uptr GetMmapGranularity();
uptr GetMaxVirtualAddress();
uptr GetMaxUserVirtualAddress();
// Threads
tid_t GetTid();
int TgKill(pid_t pid, tid_t tid, int sig);
ThreadID GetTid();
int TgKill(pid_t pid, ThreadID tid, int sig);
uptr GetThreadSelf();
void GetThreadStackTopAndBottom(bool at_initialization, uptr *stack_top,
uptr *stack_bottom);
@@ -390,6 +390,9 @@ void ReportDeadlySignal(const SignalContext &sig, u32 tid,
void SetAlternateSignalStack();
void UnsetAlternateSignalStack();
bool IsSignalHandlerFromSanitizer(int signum);
bool SetSignalHandlerFromSanitizer(int signum, bool new_state);
// Construct a one-line string:
// SUMMARY: SanitizerToolName: error_message
// and pass it to __sanitizer_report_error_summary.
@@ -484,6 +487,13 @@ inline uptr Log2(uptr x) {
return LeastSignificantSetBitIndex(x);
}
inline bool IntervalsAreSeparate(uptr start1, uptr end1, uptr start2,
uptr end2) {
CHECK_LE(start1, end1);
CHECK_LE(start2, end2);
return (end1 < start2) || (end2 < start1);
}
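The IntervalsAreSeparate helper added above treats both interval endpoints as inclusive, so two ranges that merely touch at a shared address count as overlapping, not separate. A small standalone sketch of that semantics (not part of the commit; the predicate is re-implemented here purely for illustration):

#include <cassert>

using uptr_t = unsigned long;  // stand-in for the sanitizer's uptr

static bool intervals_are_separate(uptr_t start1, uptr_t end1,
                                   uptr_t start2, uptr_t end2) {
  // Same predicate as IntervalsAreSeparate: separate only if one interval
  // ends strictly before the other begins.
  return (end1 < start2) || (end2 < start1);
}

int main() {
  assert(intervals_are_separate(0x1000, 0x1fff, 0x2000, 0x2fff));   // disjoint
  assert(!intervals_are_separate(0x1000, 0x2000, 0x2000, 0x2fff));  // touching counts as overlap
  return 0;
}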
// Don't use std::min, std::max or std::swap, to minimize dependency
// on libstdc++.
template <class T>
@@ -734,6 +744,7 @@ enum ModuleArch {
kModuleArchARMV7S,
kModuleArchARMV7K,
kModuleArchARM64,
kModuleArchARM64E,
kModuleArchLoongArch64,
kModuleArchRISCV64,
kModuleArchHexagon
@@ -807,6 +818,8 @@ inline const char *ModuleArchToString(ModuleArch arch) {
return "armv7k";
case kModuleArchARM64:
return "arm64";
case kModuleArchARM64E:
return "arm64e";
case kModuleArchLoongArch64:
return "loongarch64";
case kModuleArchRISCV64:
@@ -1285,8 +1285,34 @@ INTERCEPTOR(int, puts, char *s) {
#endif
#if SANITIZER_INTERCEPT_PRCTL
INTERCEPTOR(int, prctl, int option, unsigned long arg2, unsigned long arg3,
unsigned long arg4, unsigned long arg5) {
# if defined(__aarch64__)
// https://llvm.org/docs/PointerAuth.html
// AArch64 is currently the only architecture with full PAC support.
// Avoid adding PAC instructions to prevent crashes caused by
// prctl(PR_PAC_RESET_KEYS, ...). Since PR_PAC_RESET_KEYS resets the
// authentication key, using the old key afterward will lead to a crash.
# if defined(__ARM_FEATURE_BTI_DEFAULT)
# define BRANCH_PROTECTION_ATTRIBUTE \
__attribute__((target("branch-protection=bti")))
# else
# define BRANCH_PROTECTION_ATTRIBUTE \
__attribute__((target("branch-protection=none")))
# endif
# define PRCTL_INTERCEPTOR(ret_type, func, ...) \
DEFINE_REAL(ret_type, func, __VA_ARGS__) \
DECLARE_WRAPPER(ret_type, func, __VA_ARGS__) \
extern "C" INTERCEPTOR_ATTRIBUTE BRANCH_PROTECTION_ATTRIBUTE ret_type \
WRAP(func)(__VA_ARGS__)
# else
# define PRCTL_INTERCEPTOR INTERCEPTOR
# endif
PRCTL_INTERCEPTOR(int, prctl, int option, unsigned long arg2,
unsigned long arg3, unsigned long arg4, unsigned long arg5) {
void *ctx;
COMMON_INTERCEPTOR_ENTER(ctx, prctl, option, arg2, arg3, arg4, arg5);
static const int PR_SET_NAME = 15;
@@ -1300,7 +1326,7 @@ INTERCEPTOR(int, prctl, int option, unsigned long arg2, unsigned long arg3,
static const int PR_SET_SECCOMP = 22;
static const int SECCOMP_MODE_FILTER = 2;
# endif
if (option == PR_SET_VMA && arg2 == 0UL) {
if (option == PR_SET_VMA && arg2 == 0UL && arg5 != 0UL) {
char *name = (char *)arg5;
COMMON_INTERCEPTOR_READ_RANGE(ctx, name, internal_strlen(name) + 1);
}
@@ -1326,7 +1352,7 @@ INTERCEPTOR(int, prctl, int option, unsigned long arg2, unsigned long arg3,
}
return res;
}
#define INIT_PRCTL COMMON_INTERCEPT_FUNCTION(prctl)
# define INIT_PRCTL COMMON_INTERCEPT_FUNCTION(prctl)
#else
#define INIT_PRCTL
#endif // SANITIZER_INTERCEPT_PRCTL
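The AArch64 comment above explains why the prctl wrapper must be built without pointer-authentication instructions: if the wrapped call is prctl(PR_PAC_RESET_KEYS, ...), the signing key changes while a return address signed with the old key is still live, and the later authenticate fails. A minimal sketch of the same pattern outside the interceptor machinery (hypothetical helper, assuming an AArch64 Linux target otherwise built with -mbranch-protection=pac-ret):

#include <sys/prctl.h>

#ifndef PR_PAC_RESET_KEYS
#  define PR_PAC_RESET_KEYS 54          // from <linux/prctl.h>
#endif
#ifndef PR_PAC_APIAKEY
#  define PR_PAC_APIAKEY (1UL << 0)     // instruction key A
#endif

// Compiled without return-address signing, so this wrapper itself adds no
// PACIASP/AUTIASP pair around the key reset (mirroring the
// BRANCH_PROTECTION_ATTRIBUTE used by the interceptor above). Whether the
// rest of the call stack tolerates the reset is the caller's concern.
__attribute__((target("branch-protection=none")))
int reset_pac_instruction_key() {
  return prctl(PR_PAC_RESET_KEYS, PR_PAC_APIAKEY, 0, 0, 0);
}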
@@ -344,12 +344,16 @@ static void ioctl_table_fill() {
_(SOUND_PCM_WRITE_CHANNELS, WRITE, sizeof(int));
_(SOUND_PCM_WRITE_FILTER, WRITE, sizeof(int));
_(TCFLSH, NONE, 0);
# if SANITIZER_TERMIOS_IOCTL_CONSTANTS
_(TCGETS, WRITE, struct_termios_sz);
# endif
_(TCSBRK, NONE, 0);
_(TCSBRKP, NONE, 0);
# if SANITIZER_TERMIOS_IOCTL_CONSTANTS
_(TCSETS, READ, struct_termios_sz);
_(TCSETSF, READ, struct_termios_sz);
_(TCSETSW, READ, struct_termios_sz);
# endif
_(TCXONC, NONE, 0);
_(TIOCGLCKTRMIOS, WRITE, struct_termios_sz);
_(TIOCGSOFTCAR, WRITE, sizeof(int));
@@ -5,6 +5,7 @@
ASM_HIDDEN(COMMON_INTERCEPTOR_SPILL_AREA)
TEXT_SECTION
.comm _ZN14__interception10real_vforkE,8,8
.globl ASM_WRAPPER_NAME(vfork)
ASM_TYPE_FUNCTION(ASM_WRAPPER_NAME(vfork))
@@ -43,6 +44,6 @@ ASM_SIZE(vfork)
ASM_INTERCEPTOR_TRAMPOLINE(vfork)
ASM_TRAMPOLINE_ALIAS(vfork, vfork)
GNU_PROPERTY_BTI_PAC
GNU_PROPERTY_BTI_PAC_GCS
#endif
@@ -2,6 +2,8 @@
#include "sanitizer_common/sanitizer_asm.h"
.att_syntax
.comm _ZN14__interception10real_vforkE,4,4
.globl ASM_WRAPPER_NAME(vfork)
ASM_TYPE_FUNCTION(ASM_WRAPPER_NAME(vfork))
@@ -2,6 +2,8 @@
#include "sanitizer_common/sanitizer_asm.h"
.att_syntax
.comm _ZN14__interception10real_vforkE,8,8
.globl ASM_WRAPPER_NAME(vfork)
ASM_TYPE_FUNCTION(ASM_WRAPPER_NAME(vfork))
@@ -143,6 +143,12 @@ struct sanitizer_kernel_sockaddr {
char sa_data[14];
};
struct sanitizer_kernel_open_how {
u64 flags;
u64 mode;
u64 resolve;
};
// Real sigset size is always passed as a syscall argument.
// Declare it "void" to catch sizeof(kernel_sigset_t).
typedef void kernel_sigset_t;
@@ -2843,6 +2849,18 @@ PRE_SYSCALL(openat)(long dfd, const void *filename, long flags, long mode) {
POST_SYSCALL(openat)
(long res, long dfd, const void *filename, long flags, long mode) {}
PRE_SYSCALL(openat2)(long dfd, const void* filename,
const sanitizer_kernel_open_how* how, uptr howlen) {
if (filename)
PRE_READ(filename, __sanitizer::internal_strlen((const char*)filename) + 1);
if (how)
PRE_READ(how, howlen);
}
POST_SYSCALL(openat2)(long res, long dfd, const void* filename,
const sanitizer_kernel_open_how* how, uptr howlen) {}
PRE_SYSCALL(newfstatat)
(long dfd, const void *filename, void *statbuf, long flag) {
if (filename)
@@ -1,43 +0,0 @@
//===-- sanitizer_coverage_interface.inc ----------------------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
// Sanitizer Coverage interface list.
//===----------------------------------------------------------------------===//
INTERFACE_FUNCTION(__sanitizer_cov_dump)
INTERFACE_FUNCTION(__sanitizer_cov_reset)
INTERFACE_FUNCTION(__sanitizer_dump_coverage)
INTERFACE_FUNCTION(__sanitizer_dump_trace_pc_guard_coverage)
INTERFACE_WEAK_FUNCTION(__sancov_default_options)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_cmp)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_cmp1)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_cmp2)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_cmp4)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_cmp8)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_const_cmp1)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_const_cmp2)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_const_cmp4)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_const_cmp8)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_div4)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_div8)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_gep)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_pc_guard)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_pc_guard_init)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_pc_indir)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_load1)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_load2)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_load4)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_load8)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_load16)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_store1)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_store2)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_store4)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_store8)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_store16)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_trace_switch)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_8bit_counters_init)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_bool_flag_init)
INTERFACE_WEAK_FUNCTION(__sanitizer_cov_pcs_init)
+40 -12
@@ -36,9 +36,17 @@ void RawWrite(const char *buffer) {
void ReportFile::ReopenIfNecessary() {
mu->CheckLocked();
if (fd == kStdoutFd || fd == kStderrFd) return;
uptr pid = internal_getpid();
if (fallbackToStderrActive && fd_pid != pid) {
// If fallbackToStderrActive is set then we fell back to stderr. If this is a
// new process, mark fd as invalid so we attempt to open again.
CHECK_EQ(fd, kStderrFd);
fd = kInvalidFd;
fallbackToStderrActive = false;
}
if (fd == kStdoutFd || fd == kStderrFd)
return;
// If in tracer, use the parent's file.
if (pid == stoptheworld_tracer_pid)
pid = stoptheworld_tracer_ppid;
@@ -48,8 +56,7 @@ void ReportFile::ReopenIfNecessary() {
// process, close it now.
if (fd_pid == pid)
return;
else
CloseFile(fd);
CloseFile(fd);
}
const char *exe_name = GetProcessName();
@@ -65,18 +72,24 @@ void ReportFile::ReopenIfNecessary() {
error_t err;
fd = OpenFile(full_path, WrOnly, &err);
if (fd == kInvalidFd) {
const char *ErrorMsgPrefix = "ERROR: Can't open file: ";
bool fallback = common_flags()->log_fallback_to_stderr;
const char *ErrorMsgPrefix =
fallback ? "WARNING: Can't open file, falling back to stderr: "
: "ERROR: Can't open file: ";
WriteToFile(kStderrFd, ErrorMsgPrefix, internal_strlen(ErrorMsgPrefix));
WriteToFile(kStderrFd, full_path, internal_strlen(full_path));
char errmsg[100];
internal_snprintf(errmsg, sizeof(errmsg), " (reason: %d)\n", err);
WriteToFile(kStderrFd, errmsg, internal_strlen(errmsg));
Die();
if (!fallback)
Die();
fallbackToStderrActive = true;
fd = kStderrFd;
}
fd_pid = pid;
}
static void RecursiveCreateParentDirs(char *path) {
static void RecursiveCreateParentDirs(char *path, fd_t &fd) {
if (path[0] == '\0')
return;
for (int i = 1; path[i] != '\0'; ++i) {
@@ -85,12 +98,19 @@ static void RecursiveCreateParentDirs(char *path) {
continue;
path[i] = '\0';
if (!DirExists(path) && !CreateDir(path)) {
const char *ErrorMsgPrefix = "ERROR: Can't create directory: ";
bool fallback = common_flags()->log_fallback_to_stderr;
const char *ErrorMsgPrefix =
fallback ? "WARNING: Can't create directory, falling back to stderr: "
: "ERROR: Can't create directory: ";
WriteToFile(kStderrFd, ErrorMsgPrefix, internal_strlen(ErrorMsgPrefix));
WriteToFile(kStderrFd, path, internal_strlen(path));
const char *ErrorMsgSuffix = "\n";
WriteToFile(kStderrFd, ErrorMsgSuffix, internal_strlen(ErrorMsgSuffix));
Die();
if (!fallback)
Die();
path[i] = save;
fd = kStderrFd;
return;
}
path[i] = save;
}
@@ -108,6 +128,9 @@ static void ParseAndSetPath(const char *pattern, char *dest,
CHECK(dest);
CHECK_GE(dest_size, 1);
dest[0] = '\0';
// Return empty string if empty string was passed
if (internal_strlen(pattern) == 0)
return;
uptr next_substr_start_idx = 0;
for (uptr i = 0; i < internal_strlen(pattern) - 1; i++) {
if (pattern[i] != '%')
@@ -161,12 +184,17 @@ void ReportFile::SetReportPath(const char *path) {
if (path) {
uptr len = internal_strlen(path);
if (len > sizeof(path_prefix) - 100) {
const char *message = "ERROR: Path is too long: ";
bool fallback = common_flags()->log_fallback_to_stderr;
const char *message =
fallback ? "WARNING: Path is too long, falling back to stderr: "
: "ERROR: Path is too long: ";
WriteToFile(kStderrFd, message, internal_strlen(message));
WriteToFile(kStderrFd, path, 8);
message = "...\n";
WriteToFile(kStderrFd, message, internal_strlen(message));
Die();
if (!fallback)
Die();
path = "stderr";
}
}
@@ -180,7 +208,7 @@ void ReportFile::SetReportPath(const char *path) {
fd = kStdoutFd;
} else {
ParseAndSetPath(path, path_prefix, kMaxPathLength);
RecursiveCreateParentDirs(path_prefix);
RecursiveCreateParentDirs(path_prefix, fd);
}
}
+3
@@ -43,6 +43,9 @@ struct ReportFile {
// PID of the process that opened fd. If a fork() occurs,
// the PID of child will be different from fd_pid.
uptr fd_pid;
// Set to true if the last attempt to open the logfile failed, perhaps due to
// permission errors
bool fallbackToStderrActive = false;
private:
void ReopenIfNecessary();
+7
@@ -65,6 +65,8 @@ COMMON_FLAG(
bool, log_to_syslog, (bool)SANITIZER_ANDROID || (bool)SANITIZER_APPLE,
"Write all sanitizer output to syslog in addition to other means of "
"logging.")
COMMON_FLAG(bool, log_fallback_to_stderr, false,
"When set, fallback to stderr if we are unable to open log path.")
COMMON_FLAG(
int, verbosity, 0,
"Verbosity level (0 - silent, 1 - a bit of output, 2+ - more output).")
@@ -111,6 +113,11 @@ COMMON_FLAG(HandleSignalMode, handle_sigfpe, kHandleSignalYes,
COMMON_FLAG(bool, allow_user_segv_handler, true,
"Deprecated. True has no effect, use handle_sigbus=1. If false, "
"handle_*=1 will be upgraded to handle_*=2.")
COMMON_FLAG(bool, cloak_sanitizer_signal_handlers, false,
"If set, signal/sigaction will pretend that sanitizers did not "
"preinstall any signal handlers. If the user subsequently installs "
"a signal handler, this will disable cloaking for the respective "
"signal.")
COMMON_FLAG(bool, use_sigaltstack, true,
"If set, uses alternate stack for signal handling.")
COMMON_FLAG(bool, detect_deadlocks, true,
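The new cloak_sanitizer_signal_handlers flag is described as making signal()/sigaction() pretend that the sanitizer preinstalled no handlers, until the user installs one of their own for that signal. A small sketch of the observable effect (hypothetical test program, not part of the commit; assumes the flag is passed through the tool's options environment variable, e.g. TSAN_OPTIONS=cloak_sanitizer_signal_handlers=1):

#include <signal.h>
#include <cstdio>

int main() {
  struct sigaction sa;
  sigaction(SIGSEGV, nullptr, &sa);  // query only, install nothing
  // With cloaking enabled, the sanitizer's preinstalled SIGSEGV handler is
  // expected to stay hidden and the disposition to read back as SIG_DFL;
  // installing our own handler afterwards would disable cloaking for SIGSEGV.
  std::printf("SIGSEGV disposition %s SIG_DFL\n",
              sa.sa_handler == SIG_DFL ? "is" : "is not");
  return 0;
}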
+30 -3
@@ -14,6 +14,7 @@
#include "sanitizer_fuchsia.h"
#if SANITIZER_FUCHSIA
# include <limits.h>
# include <pthread.h>
# include <stdlib.h>
# include <unistd.h>
@@ -68,7 +69,7 @@ int internal_dlinfo(void *handle, int request, void *p) { UNIMPLEMENTED(); }
uptr GetThreadSelf() { return reinterpret_cast<uptr>(thrd_current()); }
tid_t GetTid() { return GetThreadSelf(); }
ThreadID GetTid() { return GetThreadSelf(); }
void Abort() { abort(); }
@@ -117,11 +118,37 @@ uptr GetMmapGranularity() { return _zx_system_get_page_size(); }
sanitizer_shadow_bounds_t ShadowBounds;
// Any sanitizer that utilizes shadow should explicitly call this whenever it's
// appropriate for that sanitizer to reference shadow bounds. For ASan, this is
// done in `InitializeShadowMemory` and for HWASan, this is done in
// `InitShadow`.
void InitShadowBounds() { ShadowBounds = __sanitizer_shadow_bounds(); }
// TODO(leonardchan): It's not immediately clear from a user perspective if
// `GetMaxUserVirtualAddress` should be called exactly once on runtime startup
// or can be called multiple times. Currently it looks like most instances of
// `GetMaxUserVirtualAddress` are meant to be called once, but if someone
// decides to call this multiple times in the future, we should have a separate
// function that's ok to call multiple times. Ideally we would just invoke this
// syscall once. Also for Fuchsia, this syscall technically gets invoked twice
// since `__sanitizer_shadow_bounds` also invokes this syscall under the hood.
uptr GetMaxUserVirtualAddress() {
InitShadowBounds();
return ShadowBounds.memory_limit - 1;
zx_info_vmar_t info;
zx_status_t status = _zx_object_get_info(_zx_vmar_root_self(), ZX_INFO_VMAR,
&info, sizeof(info), NULL, NULL);
CHECK_EQ(status, ZX_OK);
// Find the top of the accessible address space.
uintptr_t top = info.base + info.len;
// Round it up to a power-of-two size. There may be some pages at
// the top that can't actually be mapped, but for purposes of the
// shadow, we'll pretend they could be.
int bit = (sizeof(uintptr_t) * CHAR_BIT) - __builtin_clzl(top);
if (top != (uintptr_t)1 << bit)
top = (uintptr_t)1 << (bit + 1);
return top - 1;
}
uptr GetMaxVirtualAddress() { return GetMaxUserVirtualAddress(); }
+2 -2
@@ -231,12 +231,12 @@ uptr internal_execve(const char *filename, char *const argv[],
}
# if 0
tid_t GetTid() {
ThreadID GetTid() {
DEFINE__REAL(int, _lwp_self);
return _REAL(_lwp_self);
}
int TgKill(pid_t pid, tid_t tid, int sig) {
int TgKill(pid_t pid, ThreadID tid, int sig) {
DEFINE__REAL(int, _lwp_kill, int a, int b);
(void)pid;
return _REAL(_lwp_kill, tid, sig);
+1 -1
@@ -209,7 +209,7 @@ typedef long ssize;
typedef sptr ssize;
#endif
typedef u64 tid_t;
typedef u64 ThreadID;
// ----------- ATTENTION -------------
// This header should NOT include any other headers to avoid portability issues.
+8
@@ -190,6 +190,14 @@ uptr internal_strlcat(char *dst, const char *src, uptr maxlen) {
return dstlen + srclen;
}
char* internal_strcat(char* dst, const char* src) {
uptr len = internal_strlen(dst);
uptr i;
for (i = 0; src[i]; i++) dst[len + i] = src[i];
dst[len + i] = 0;
return dst;
}
char *internal_strncat(char *dst, const char *src, uptr n) {
uptr len = internal_strlen(dst);
uptr i;
+1
@@ -59,6 +59,7 @@ char *internal_strdup(const char *s);
uptr internal_strlen(const char *s);
uptr internal_strlcat(char *dst, const char *src, uptr maxlen);
char *internal_strncat(char *dst, const char *src, uptr n);
char* internal_strcat(char* dst, const char* src);
int internal_strncmp(const char *s1, const char *s2, uptr n);
uptr internal_strlcpy(char *dst, const char *src, uptr maxlen);
char *internal_strncpy(char *dst, const char *src, uptr n);
+15 -7
@@ -635,7 +635,7 @@ bool DirExists(const char *path) {
}
# if !SANITIZER_NETBSD
tid_t GetTid() {
ThreadID GetTid() {
# if SANITIZER_FREEBSD
long Tid;
thr_self(&Tid);
@@ -649,7 +649,7 @@ tid_t GetTid() {
# endif
}
int TgKill(pid_t pid, tid_t tid, int sig) {
int TgKill(pid_t pid, ThreadID tid, int sig) {
# if SANITIZER_LINUX
return internal_syscall(SYSCALL(tgkill), pid, tid, sig);
# elif SANITIZER_FREEBSD
@@ -1091,7 +1091,7 @@ ThreadLister::ThreadLister(pid_t pid) : buffer_(4096) {
}
ThreadLister::Result ThreadLister::ListThreads(
InternalMmapVector<tid_t> *threads) {
InternalMmapVector<ThreadID> *threads) {
int descriptor = internal_open(task_path_.data(), O_RDONLY | O_DIRECTORY);
if (internal_iserror(descriptor)) {
Report("Can't open %s for reading.\n", task_path_.data());
@@ -1146,7 +1146,7 @@ ThreadLister::Result ThreadLister::ListThreads(
}
}
const char *ThreadLister::LoadStatus(tid_t tid) {
const char *ThreadLister::LoadStatus(ThreadID tid) {
status_path_.clear();
status_path_.AppendF("%s/%llu/status", task_path_.data(), tid);
auto cleanup = at_scope_exit([&] {
@@ -1159,7 +1159,7 @@ const char *ThreadLister::LoadStatus(tid_t tid) {
return buffer_.data();
}
bool ThreadLister::IsAlive(tid_t tid) {
bool ThreadLister::IsAlive(ThreadID tid) {
// /proc/%d/task/%d/status uses same call to detect alive threads as
// proc_task_readdir. See task_state implementation in Linux.
static const char kPrefix[] = "\nPPid:";
@@ -1289,7 +1289,7 @@ uptr GetPageSize() {
uptr ReadBinaryName(/*out*/ char *buf, uptr buf_len) {
# if SANITIZER_HAIKU
int cookie = 0;
int32 cookie = 0;
image_info info;
const char *argv0 = "<UNKNOWN>";
while (get_next_image_info(B_CURRENT_TEAM, &cookie, &info) == B_OK) {
@@ -1989,7 +1989,10 @@ SignalContext::WriteFlag SignalContext::GetWriteFlag() const {
# elif SANITIZER_NETBSD
uptr err = ucontext->uc_mcontext.__gregs[_REG_ERR];
# elif SANITIZER_HAIKU
uptr err = ucontext->uc_mcontext.r13;
uptr err = 0; // FIXME: ucontext->uc_mcontext.r13;
// The err register was added on the main branch and not
// available with the current release. To be reverted later.
// https://github.com/haiku/haiku/commit/11adda21aa4e6b24f71a496868a44d7607bc3764
# elif SANITIZER_SOLARIS && defined(__i386__)
const int Err = 13;
uptr err = ucontext->uc_mcontext.gregs[Err];
@@ -2619,6 +2622,11 @@ static void GetPcSpBp(void *context, uptr *pc, uptr *sp, uptr *bp) {
*pc = ucontext->uc_mcontext.mc_eip;
*bp = ucontext->uc_mcontext.mc_ebp;
*sp = ucontext->uc_mcontext.mc_esp;
# elif SANITIZER_HAIKU
ucontext_t *ucontext = (ucontext_t *)context;
*pc = ucontext->uc_mcontext.eip;
*bp = ucontext->uc_mcontext.ebp;
*sp = ucontext->uc_mcontext.esp;
# else
ucontext_t *ucontext = (ucontext_t *)context;
# if SANITIZER_SOLARIS
+3 -3
@@ -108,11 +108,11 @@ class ThreadLister {
Incomplete,
Ok,
};
Result ListThreads(InternalMmapVector<tid_t> *threads);
const char *LoadStatus(tid_t tid);
Result ListThreads(InternalMmapVector<ThreadID> *threads);
const char *LoadStatus(ThreadID tid);
private:
bool IsAlive(tid_t tid);
bool IsAlive(ThreadID tid);
InternalScopedString task_path_;
InternalScopedString status_path_;
@@ -29,6 +29,7 @@
# include "sanitizer_solaris.h"
# if SANITIZER_HAIKU
# define _GNU_SOURCE
# define _DEFAULT_SOURCE
# endif
+255 -104
@@ -22,6 +22,11 @@
# endif
# include <stdio.h>
// Start searching for available memory region past PAGEZERO, which is
// 4KB on 32-bit and 4GB on 64-bit.
# define GAP_SEARCH_START_ADDRESS \
((SANITIZER_WORDSIZE == 32) ? 0x000000001000 : 0x000100000000)
# include "sanitizer_common.h"
# include "sanitizer_file.h"
# include "sanitizer_flags.h"
@@ -58,9 +63,11 @@ extern char ***_NSGetArgv(void);
# include <dlfcn.h> // for dladdr()
# include <errno.h>
# include <fcntl.h>
# include <inttypes.h>
# include <libkern/OSAtomic.h>
# include <mach-o/dyld.h>
# include <mach/mach.h>
# include <mach/mach_error.h>
# include <mach/mach_time.h>
# include <mach/vm_statistics.h>
# include <malloc/malloc.h>
@@ -96,8 +103,16 @@ extern "C" {
natural_t *nesting_depth,
vm_region_recurse_info_t info,
mach_msg_type_number_t *infoCnt);
extern const void* _dyld_get_shared_cache_range(size_t* length);
}
# if !SANITIZER_GO
// Weak symbol no-op when TSan is not linked
SANITIZER_WEAK_ATTRIBUTE extern void __tsan_set_in_internal_write_call(
bool value) {}
# endif
namespace __sanitizer {
#include "sanitizer_syscall_generic.inc"
@@ -168,7 +183,15 @@ uptr internal_read(fd_t fd, void *buf, uptr count) {
}
uptr internal_write(fd_t fd, const void *buf, uptr count) {
# if SANITIZER_GO
return write(fd, buf, count);
# else
// We need to disable interceptors when writing in TSan
__tsan_set_in_internal_write_call(true);
uptr res = write(fd, buf, count);
__tsan_set_in_internal_write_call(false);
return res;
# endif
}
uptr internal_stat(const char *path, void *buf) {
@@ -258,53 +281,43 @@ int internal_sysctlbyname(const char *sname, void *oldp, uptr *oldlenp,
(size_t)newlen);
}
static fd_t internal_spawn_impl(const char *argv[], const char *envp[],
pid_t *pid) {
fd_t primary_fd = kInvalidFd;
fd_t secondary_fd = kInvalidFd;
bool internal_spawn(const char* argv[], const char* envp[], pid_t* pid,
fd_t fd_stdin, fd_t fd_stdout) {
// NOTE: Caller ensures that fd_stdin and fd_stdout are not 0, 1, or 2, since
// this can break communication.
//
// NOTE: Caller is responsible for closing fd_stdin after the process has
// died.
int res;
auto fd_closer = at_scope_exit([&] {
internal_close(primary_fd);
internal_close(secondary_fd);
// NOTE: We intentionally do not close fd_stdin since this can
// cause us to receive a fatal SIGPIPE if the process dies.
internal_close(fd_stdout);
});
// We need a new pseudoterminal to avoid buffering problems. The 'atos' tool
// in particular detects when it's talking to a pipe and forgets to flush the
// output stream after sending a response.
primary_fd = posix_openpt(O_RDWR);
if (primary_fd == kInvalidFd)
return kInvalidFd;
int res = grantpt(primary_fd) || unlockpt(primary_fd);
if (res != 0) return kInvalidFd;
// Use TIOCPTYGNAME instead of ptsname() to avoid threading problems.
char secondary_pty_name[128];
res = ioctl(primary_fd, TIOCPTYGNAME, secondary_pty_name);
if (res == -1) return kInvalidFd;
secondary_fd = internal_open(secondary_pty_name, O_RDWR);
if (secondary_fd == kInvalidFd)
return kInvalidFd;
// File descriptor actions
posix_spawn_file_actions_t acts;
res = posix_spawn_file_actions_init(&acts);
if (res != 0) return kInvalidFd;
if (res != 0)
return false;
auto acts_cleanup = at_scope_exit([&] {
posix_spawn_file_actions_destroy(&acts);
});
res = posix_spawn_file_actions_adddup2(&acts, secondary_fd, STDIN_FILENO) ||
posix_spawn_file_actions_adddup2(&acts, secondary_fd, STDOUT_FILENO) ||
posix_spawn_file_actions_addclose(&acts, secondary_fd);
if (res != 0) return kInvalidFd;
res = posix_spawn_file_actions_adddup2(&acts, fd_stdin, STDIN_FILENO) ||
posix_spawn_file_actions_adddup2(&acts, fd_stdout, STDOUT_FILENO) ||
posix_spawn_file_actions_addclose(&acts, fd_stdin) ||
posix_spawn_file_actions_addclose(&acts, fd_stdout);
if (res != 0)
return false;
// Spawn attributes
posix_spawnattr_t attrs;
res = posix_spawnattr_init(&attrs);
if (res != 0) return kInvalidFd;
if (res != 0)
return false;
auto attrs_cleanup = at_scope_exit([&] {
posix_spawnattr_destroy(&attrs);
@@ -313,50 +326,17 @@ static fd_t internal_spawn_impl(const char *argv[], const char *envp[],
// In the spawned process, close all file descriptors that are not explicitly
// described by the file actions object. This is Darwin-specific extension.
res = posix_spawnattr_setflags(&attrs, POSIX_SPAWN_CLOEXEC_DEFAULT);
if (res != 0) return kInvalidFd;
if (res != 0)
return false;
// posix_spawn
char **argv_casted = const_cast<char **>(argv);
char **envp_casted = const_cast<char **>(envp);
res = posix_spawn(pid, argv[0], &acts, &attrs, argv_casted, envp_casted);
if (res != 0) return kInvalidFd;
if (res != 0)
return false;
// Disable echo in the new terminal, disable CR.
struct termios termflags;
tcgetattr(primary_fd, &termflags);
termflags.c_oflag &= ~ONLCR;
termflags.c_lflag &= ~ECHO;
tcsetattr(primary_fd, TCSANOW, &termflags);
// On success, do not close primary_fd on scope exit.
fd_t fd = primary_fd;
primary_fd = kInvalidFd;
return fd;
}
fd_t internal_spawn(const char *argv[], const char *envp[], pid_t *pid) {
// The client program may close its stdin and/or stdout and/or stderr thus
// allowing open/posix_openpt to reuse file descriptors 0, 1 or 2. In this
// case the communication is broken if either the parent or the child tries to
// close or duplicate these descriptors. We temporarily reserve these
// descriptors here to prevent this.
fd_t low_fds[3];
size_t count = 0;
for (; count < 3; count++) {
low_fds[count] = posix_openpt(O_RDWR);
if (low_fds[count] >= STDERR_FILENO)
break;
}
fd_t fd = internal_spawn_impl(argv, envp, pid);
for (; count > 0; count--) {
internal_close(low_fds[count]);
}
return fd;
return true;
}
uptr internal_rename(const char *oldpath, const char *newpath) {
@@ -394,8 +374,8 @@ bool DirExists(const char *path) {
return S_ISDIR(st.st_mode);
}
tid_t GetTid() {
tid_t tid;
ThreadID GetTid() {
ThreadID tid;
pthread_threadid_np(nullptr, &tid);
return tid;
}
@@ -769,11 +749,17 @@ void internal_join_thread(void *th) { pthread_join((pthread_t)th, 0); }
static Mutex syslog_lock;
# endif
# if SANITIZER_DRIVERKIT
# define SANITIZER_OS_LOG os_log
# else
# define SANITIZER_OS_LOG os_log_error
# endif
void WriteOneLineToSyslog(const char *s) {
#if !SANITIZER_GO
syslog_lock.CheckLocked();
if (GetMacosAlignedVersion() >= MacosVersion(10, 12)) {
os_log_error(OS_LOG_DEFAULT, "%{public}s", s);
SANITIZER_OS_LOG(OS_LOG_DEFAULT, "%{public}s", s);
} else {
#pragma clang diagnostic push
// asl_log is deprecated.
@@ -837,22 +823,22 @@ void LogMessageOnPrintf(const char *str) {
void LogFullErrorReport(const char *buffer) {
# if !SANITIZER_GO
// Log with os_log_error. This will make it into the crash log.
// When logging with os_log_error this will make it into the crash log.
if (internal_strncmp(SanitizerToolName, "AddressSanitizer",
sizeof("AddressSanitizer") - 1) == 0)
os_log_error(OS_LOG_DEFAULT, "Address Sanitizer reported a failure.");
SANITIZER_OS_LOG(OS_LOG_DEFAULT, "Address Sanitizer reported a failure.");
else if (internal_strncmp(SanitizerToolName, "UndefinedBehaviorSanitizer",
sizeof("UndefinedBehaviorSanitizer") - 1) == 0)
os_log_error(OS_LOG_DEFAULT,
"Undefined Behavior Sanitizer reported a failure.");
SANITIZER_OS_LOG(OS_LOG_DEFAULT,
"Undefined Behavior Sanitizer reported a failure.");
else if (internal_strncmp(SanitizerToolName, "ThreadSanitizer",
sizeof("ThreadSanitizer") - 1) == 0)
os_log_error(OS_LOG_DEFAULT, "Thread Sanitizer reported a failure.");
SANITIZER_OS_LOG(OS_LOG_DEFAULT, "Thread Sanitizer reported a failure.");
else
os_log_error(OS_LOG_DEFAULT, "Sanitizer tool reported a failure.");
SANITIZER_OS_LOG(OS_LOG_DEFAULT, "Sanitizer tool reported a failure.");
if (common_flags()->log_to_syslog)
os_log_error(OS_LOG_DEFAULT, "Consult syslog for more information.");
SANITIZER_OS_LOG(OS_LOG_DEFAULT, "Consult syslog for more information.");
// Log to syslog.
// The logging on OS X may call pthread_create so we need the threading
@@ -933,7 +919,17 @@ static void DisableMmapExcGuardExceptions() {
RTLD_DEFAULT, "task_set_exc_guard_behavior");
if (set_behavior == nullptr) return;
const task_exc_guard_behavior_t task_exc_guard_none = 0;
set_behavior(mach_task_self(), task_exc_guard_none);
kern_return_t res = set_behavior(mach_task_self(), task_exc_guard_none);
if (res != KERN_SUCCESS) {
Report(
"WARN: task_set_exc_guard_behavior returned %d (%s), "
"mmap may fail unexpectedly.\n",
res, mach_error_string(res));
if (res == KERN_DENIED)
Report(
"HINT: Check that task_set_exc_guard_behavior is allowed by "
"sandbox.\n");
}
}
static void VerifyInterceptorsWorking();
@@ -1100,6 +1096,67 @@ static void StripEnv() {
}
#endif // SANITIZER_GO
// Prints out a consolidated memory map: contiguous regions
// are merged together.
static void PrintVmmap() {
const mach_vm_address_t max_vm_address = GetMaxVirtualAddress() + 1;
mach_vm_address_t address = GAP_SEARCH_START_ADDRESS;
kern_return_t kr = KERN_SUCCESS;
Report("Memory map:\n");
mach_vm_address_t last = 0;
mach_vm_address_t lastsz = 0;
while (1) {
mach_vm_size_t vmsize = 0;
natural_t depth = 0;
vm_region_submap_short_info_data_64_t vminfo;
mach_msg_type_number_t count = VM_REGION_SUBMAP_SHORT_INFO_COUNT_64;
kr = mach_vm_region_recurse(mach_task_self(), &address, &vmsize, &depth,
(vm_region_info_t)&vminfo, &count);
if (kr == KERN_DENIED) {
Report(
"ERROR: mach_vm_region_recurse got KERN_DENIED when printing memory "
"map.\n");
Report(
"HINT: Check whether mach_vm_region_recurse is allowed by "
"sandbox.\n");
}
if (kr == KERN_SUCCESS && address < max_vm_address) {
if (last + lastsz == address) {
// This region is contiguous with the last; merge together.
lastsz += vmsize;
} else {
if (lastsz)
Printf("|| `[%p, %p]` || size=0x%016" PRIx64 " ||\n", (void*)last,
(void*)(last + lastsz), lastsz);
last = address;
lastsz = vmsize;
}
address += vmsize;
} else {
// We've reached the end of the memory map. Print the last remaining
// region, if there is one.
if (lastsz)
Printf("|| `[%p, %p]` || size=0x%016" PRIx64 " ||\n", (void*)last,
(void*)(last + lastsz), lastsz);
break;
}
}
}
static void ReportShadowAllocFail(uptr shadow_size_bytes, uptr alignment) {
Report(
"FATAL: Failed to allocate shadow memory. Tried to allocate %p bytes "
"(alignment=%p).\n",
(void*)shadow_size_bytes, (void*)alignment);
PrintVmmap();
}
char **GetArgv() {
return *_NSGetArgv();
}
@@ -1207,10 +1264,11 @@ uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
if (new_max_vm < max_occupied_addr) {
Report("Unable to find a memory range for dynamic shadow.\n");
Report(
"space_size = %p, largest_gap_found = %p, max_occupied_addr = %p, "
"new_max_vm = %p\n",
(void *)space_size, (void *)largest_gap_found,
(void *)max_occupied_addr, (void *)new_max_vm);
"\tspace_size = %p\n\tlargest_gap_found = %p\n\tmax_occupied_addr "
"= %p\n\tnew_max_vm = %p\n",
(void*)space_size, (void*)largest_gap_found, (void*)max_occupied_addr,
(void*)new_max_vm);
ReportShadowAllocFail(shadow_size_bytes, alignment);
CHECK(0 && "cannot place shadow");
}
RestrictMemoryToMaxAddress(new_max_vm);
@@ -1221,6 +1279,7 @@ uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
nullptr, nullptr);
if (shadow_start == 0) {
Report("Unable to find a memory range after restricting VM.\n");
ReportShadowAllocFail(shadow_size_bytes, alignment);
CHECK(0 && "cannot place shadow after restricting vm");
}
}
@@ -1229,6 +1288,25 @@ uptr MapDynamicShadow(uptr shadow_size_bytes, uptr shadow_scale,
return shadow_start;
}
// Returns a list of ranges which must be covered by shadow memory,
// and cannot overlap with any fixed mappings made by a sanitizer.
// This can ensure that the sanitizer runtime does not map over
// platform-reserved regions.
void GetAppReservedRanges(InternalMmapVector<ReservedRange>& ranges) {
ranges.clear();
# if SANITIZER_OSX
// On macOS, the first 512GB are platform-reserved (some of which
// may also be available to applications).
ranges.push_back({0x1000UL, 0x8000000000UL});
# endif
VReport(2, "App ranges:\n");
for (auto& [range_start, range_end] : ranges) {
VReport(2, " [%p, %p]\n", range_start, range_end);
}
}
uptr MapDynamicShadowAndAliases(uptr shadow_size, uptr alias_size,
uptr num_aliases, uptr ring_buffer_size) {
CHECK(false && "HWASan aliasing is unimplemented on Mac");
@@ -1236,40 +1314,61 @@ uptr MapDynamicShadowAndAliases(uptr shadow_size, uptr alias_size,
}
uptr FindAvailableMemoryRange(uptr size, uptr alignment, uptr left_padding,
uptr *largest_gap_found,
uptr *max_occupied_addr) {
typedef vm_region_submap_short_info_data_64_t RegionInfo;
enum { kRegionInfoSize = VM_REGION_SUBMAP_SHORT_INFO_COUNT_64 };
// Start searching for available memory region past PAGEZERO, which is
// 4KB on 32-bit and 4GB on 64-bit.
mach_vm_address_t start_address =
(SANITIZER_WORDSIZE == 32) ? 0x000000001000 : 0x000100000000;
uptr* largest_gap_found,
uptr* max_occupied_addr) {
const mach_vm_address_t max_vm_address = GetMaxVirtualAddress() + 1;
mach_vm_address_t address = start_address;
mach_vm_address_t free_begin = start_address;
mach_vm_address_t address = GAP_SEARCH_START_ADDRESS;
mach_vm_address_t free_begin = GAP_SEARCH_START_ADDRESS;
// Restrict the search to be after any reserved ranges
InternalMmapVector<ReservedRange> app_ranges;
GetAppReservedRanges(app_ranges);
for (auto& [range_start, range_end] : app_ranges) {
address = Max(address, (mach_vm_address_t)range_end);
free_begin = Max(free_begin, (mach_vm_address_t)range_end);
}
kern_return_t kr = KERN_SUCCESS;
if (largest_gap_found) *largest_gap_found = 0;
if (max_occupied_addr) *max_occupied_addr = 0;
while (kr == KERN_SUCCESS) {
mach_vm_size_t vmsize = 0;
natural_t depth = 0;
RegionInfo vminfo;
mach_msg_type_number_t count = kRegionInfoSize;
vm_region_submap_short_info_data_64_t vminfo;
mach_msg_type_number_t count = VM_REGION_SUBMAP_SHORT_INFO_COUNT_64;
kr = mach_vm_region_recurse(mach_task_self(), &address, &vmsize, &depth,
(vm_region_info_t)&vminfo, &count);
// There are cases where going beyond the processes' max vm does
// not return KERN_INVALID_ADDRESS so we check for going beyond that
// max address as well.
if (kr == KERN_INVALID_ADDRESS || address > max_vm_address) {
if (kr == KERN_SUCCESS) {
// There are cases where going beyond the processes' max vm does
// not return KERN_INVALID_ADDRESS so we check for going beyond that
// max address as well.
if (address > max_vm_address) {
address = max_vm_address;
kr = -1; // break after this iteration.
}
if (max_occupied_addr)
*max_occupied_addr = address + vmsize;
} else if (kr == KERN_INVALID_ADDRESS) {
// No more regions beyond "address", consider the gap at the end of VM.
address = max_vm_address;
vmsize = 0;
kr = -1; // break after this iteration.
// We will break after this iteration anyway since kr != KERN_SUCCESS
} else if (kr == KERN_DENIED) {
Report("ERROR: Unable to find a memory range for dynamic shadow.\n");
Report("HINT: Ensure mach_vm_region_recurse is allowed under sandbox.\n");
Die();
} else {
if (max_occupied_addr) *max_occupied_addr = address + vmsize;
Report(
"WARNING: mach_vm_region_recurse returned unexpected code %d (%s)\n",
kr, mach_error_string(kr));
DCHECK(false && "mach_vm_region_recurse returned unexpected code");
break; // address is not valid unless KERN_SUCCESS, therefore we must not
// use it.
}
if (free_begin != address) {
// We found a free region [free_begin..address-1].
uptr gap_start = RoundUpTo((uptr)free_begin + left_padding, alignment);
@@ -1292,6 +1391,58 @@ uptr FindAvailableMemoryRange(uptr size, uptr alignment, uptr left_padding,
return 0;
}
// This function (when used during initialization, while there is
// still only a single thread) can be used to verify that a range
// of memory hasn't already been mapped and won't be mapped
// later in the shared cache.
//
// If the syscall mach_vm_region_recurse fails (due to sandbox),
// we assume that the memory is not mapped so that execution can continue.
//
// NOTE: range_end is inclusive
//
// WARNING: This function must NOT allocate memory, since it is
// used in InitializeShadowMemory between where we search for
// space for shadow and where we actually allocate it.
bool MemoryRangeIsAvailable(uptr range_start, uptr range_end) {
mach_vm_size_t vmsize = 0;
natural_t depth = 0;
vm_region_submap_short_info_data_64_t vminfo;
mach_msg_type_number_t count = VM_REGION_SUBMAP_SHORT_INFO_COUNT_64;
mach_vm_address_t address = range_start;
// First, check if the range is already mapped.
kern_return_t kr =
mach_vm_region_recurse(mach_task_self(), &address, &vmsize, &depth,
(vm_region_info_t)&vminfo, &count);
if (kr == KERN_DENIED) {
Report(
"WARN: mach_vm_region_recurse returned KERN_DENIED when checking "
"whether an address is mapped.\n");
Report("HINT: Is mach_vm_region_recurse allowed by sandbox?\n");
}
if (kr == KERN_SUCCESS && !IntervalsAreSeparate(address, address + vmsize - 1,
range_start, range_end)) {
// Overlaps with already-mapped memory
return false;
}
size_t cacheLength;
uptr cacheStart = (uptr)_dyld_get_shared_cache_range(&cacheLength);
if (cacheStart &&
!IntervalsAreSeparate(cacheStart, cacheStart + cacheLength - 1,
range_start, range_end)) {
// Overlaps with shared cache region
return false;
}
// We believe this address is available.
return true;
}
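An illustrative caller sketch, not part of this change; the address constants are made up and the declarations above are assumed to be in scope:

// Verify a candidate shadow range before committing to it, per the comment
// above. range_end is inclusive, and nothing is allocated here.
uptr candidate_beg = 0x200000000000;
uptr candidate_end = 0x400000000000 - 1;
if (!MemoryRangeIsAvailable(candidate_beg, candidate_end)) {
  Report("Candidate shadow range [%p, %p] is already in use.\n",
         (void *)candidate_beg, (void *)candidate_end);
  // Fall back to dynamic placement, e.g. via FindAvailableMemoryRange().
}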
// FIXME implement on this platform.
void GetMemoryProfile(fill_profile_f cb, uptr *stats) {}
+5
@@ -58,8 +58,13 @@ struct DarwinKernelVersion : VersionBase<DarwinKernelVersion> {
DarwinKernelVersion(u16 major, u16 minor) : VersionBase(major, minor) {}
};
struct ReservedRange {
uptr beg, end;
};
MacosVersion GetMacosAlignedVersion();
DarwinKernelVersion GetDarwinKernelVersion();
void GetAppReservedRanges(InternalMmapVector<ReservedRange>& ranges);
char **GetEnviron();
+2 -2
@@ -229,12 +229,12 @@ uptr internal_execve(const char *filename, char *const argv[],
return _sys_execve(filename, argv, envp);
}
tid_t GetTid() {
ThreadID GetTid() {
DEFINE__REAL(int, _lwp_self);
return _REAL(_lwp_self);
}
int TgKill(pid_t pid, tid_t tid, int sig) {
int TgKill(pid_t pid, ThreadID tid, int sig) {
DEFINE__REAL(int, _lwp_kill, int a, int b);
(void)pid;
return _REAL(_lwp_kill, tid, sig);
+27 -1
@@ -319,7 +319,11 @@
#endif
// The first address that can be returned by mmap.
#define SANITIZER_MMAP_BEGIN 0
#if SANITIZER_AIX && SANITIZER_WORDSIZE == 64
# define SANITIZER_MMAP_BEGIN 0x0a00'0000'0000'0000ULL
#else
# define SANITIZER_MMAP_BEGIN 0
#endif
// The range of addresses which can be returned by mmap.
// FIXME: this value should be different on different platforms. Larger values
@@ -482,4 +486,26 @@
# define SANITIZER_START_BACKGROUND_THREAD_IN_ASAN_INTERNAL 0
#endif
#if SANITIZER_LINUX
# if SANITIZER_GLIBC
// Workaround for
// glibc/commit/3d3572f59059e2b19b8541ea648a6172136ec42e
// Linux: Keep termios ioctl constants strictly internal
# if __GLIBC_PREREQ(2, 41)
# define SANITIZER_TERMIOS_IOCTL_CONSTANTS 0
# else
# define SANITIZER_TERMIOS_IOCTL_CONSTANTS 1
# endif
# else
# define SANITIZER_TERMIOS_IOCTL_CONSTANTS 1
# endif
#endif
#if SANITIZER_APPLE && SANITIZER_WORDSIZE == 64
// MTE uses the lower half of the top byte.
# define STRIP_MTE_TAG(addr) ((addr) & ~((uptr)0x0f << 56))
#else
# define STRIP_MTE_TAG(addr) (addr)
#endif
#endif // SANITIZER_PLATFORM_H
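A small standalone example of what the Apple 64-bit definition above does (illustrative only, using plain uint64_t in place of uptr):

#include <cstdint>
#include <cstdio>
int main() {
  uint64_t addr = 0x0b00123456789abcULL;               // tag 0xb in bits 56-59
  uint64_t stripped = addr & ~(uint64_t{0x0f} << 56);  // same mask as STRIP_MTE_TAG
  // Prints: 0xb00123456789abc -> 0x123456789abc
  printf("%#llx -> %#llx\n", (unsigned long long)addr,
         (unsigned long long)stripped);
  return 0;
}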
@@ -167,7 +167,7 @@ SANITIZER_WEAK_IMPORT void *aligned_alloc(__sanitizer::usize __alignment,
#define SANITIZER_INTERCEPT_STRLEN SI_NOT_FUCHSIA
#define SANITIZER_INTERCEPT_STRNLEN (SI_NOT_MAC && SI_NOT_FUCHSIA)
#define SANITIZER_INTERCEPT_STRCMP (SI_NOT_FUCHSIA && SI_NOT_AIX)
#define SANITIZER_INTERCEPT_STRCMP SI_NOT_FUCHSIA
#define SANITIZER_INTERCEPT_STRSTR SI_NOT_FUCHSIA
#define SANITIZER_INTERCEPT_STRCASESTR (SI_POSIX && SI_NOT_AIX)
#define SANITIZER_INTERCEPT_STRTOK SI_NOT_FUCHSIA
@@ -179,8 +179,8 @@ SANITIZER_WEAK_IMPORT void *aligned_alloc(__sanitizer::usize __alignment,
#define SANITIZER_INTERCEPT_TEXTDOMAIN SI_LINUX_NOT_ANDROID || SI_SOLARIS
#define SANITIZER_INTERCEPT_STRCASECMP SI_POSIX
#define SANITIZER_INTERCEPT_MEMSET 1
#define SANITIZER_INTERCEPT_MEMMOVE SI_NOT_AIX
#define SANITIZER_INTERCEPT_MEMCPY SI_NOT_AIX
#define SANITIZER_INTERCEPT_MEMMOVE 1
#define SANITIZER_INTERCEPT_MEMCPY 1
#define SANITIZER_INTERCEPT_MEMCMP SI_NOT_FUCHSIA
#define SANITIZER_INTERCEPT_BCMP \
SANITIZER_INTERCEPT_MEMCMP && \
@@ -551,7 +551,8 @@ SANITIZER_WEAK_IMPORT void *aligned_alloc(__sanitizer::usize __alignment,
#define SANITIZER_INTERCEPT_MALLOC_USABLE_SIZE (!SI_MAC && !SI_NETBSD)
#define SANITIZER_INTERCEPT_MCHECK_MPROBE SI_LINUX_NOT_ANDROID
#define SANITIZER_INTERCEPT_WCSLEN 1
#define SANITIZER_INTERCEPT_WCSCAT SI_POSIX
#define SANITIZER_INTERCEPT_WCSNLEN 1
#define SANITIZER_INTERCEPT_WCSCAT (SI_POSIX || SI_WINDOWS)
#define SANITIZER_INTERCEPT_WCSDUP SI_POSIX
#define SANITIZER_INTERCEPT_SIGNAL_AND_SIGACTION (!SI_WINDOWS && SI_NOT_FUCHSIA)
#define SANITIZER_INTERCEPT_BSD_SIGNAL SI_ANDROID
@@ -779,16 +779,16 @@ unsigned struct_ElfW_Phdr_sz = sizeof(Elf_Phdr);
unsigned IOCTL_SOUND_PCM_WRITE_FILTER = SOUND_PCM_WRITE_FILTER;
#endif // SOUND_VERSION
unsigned IOCTL_TCFLSH = TCFLSH;
unsigned IOCTL_TCGETA = TCGETA;
# if SANITIZER_TERMIOS_IOCTL_CONSTANTS
unsigned IOCTL_TCGETS = TCGETS;
# endif
unsigned IOCTL_TCSBRK = TCSBRK;
unsigned IOCTL_TCSBRKP = TCSBRKP;
unsigned IOCTL_TCSETA = TCSETA;
unsigned IOCTL_TCSETAF = TCSETAF;
unsigned IOCTL_TCSETAW = TCSETAW;
# if SANITIZER_TERMIOS_IOCTL_CONSTANTS
unsigned IOCTL_TCSETS = TCSETS;
unsigned IOCTL_TCSETSF = TCSETSF;
unsigned IOCTL_TCSETSW = TCSETSW;
# endif
unsigned IOCTL_TCXONC = TCXONC;
unsigned IOCTL_TIOCGLCKTRMIOS = TIOCGLCKTRMIOS;
unsigned IOCTL_TIOCGSOFTCAR = TIOCGSOFTCAR;
@@ -32,6 +32,8 @@
# elif SANITIZER_GLIBC || SANITIZER_ANDROID
# define SANITIZER_HAS_STAT64 1
# define SANITIZER_HAS_STATFS64 1
# elif SANITIZER_HAIKU
# include <stdint.h>
# endif
# if defined(__sparc__)
@@ -102,6 +104,8 @@ const unsigned struct_kernel_stat_sz = SANITIZER_ANDROID
? FIRST_32_SECOND_64(104, 128)
# if defined(_ABIN32) && _MIPS_SIM == _ABIN32
: FIRST_32_SECOND_64(176, 216);
# elif SANITIZER_MUSL
: FIRST_32_SECOND_64(160, 208);
# else
: FIRST_32_SECOND_64(160, 216);
# endif
@@ -476,6 +480,30 @@ struct __sanitizer_cmsghdr {
int cmsg_level;
int cmsg_type;
};
# elif SANITIZER_MUSL
struct __sanitizer_msghdr {
void *msg_name;
unsigned msg_namelen;
struct __sanitizer_iovec *msg_iov;
int msg_iovlen;
# if SANITIZER_WORDSIZE == 64
int __pad1;
# endif
void *msg_control;
unsigned msg_controllen;
# if SANITIZER_WORDSIZE == 64
int __pad2;
# endif
int msg_flags;
};
struct __sanitizer_cmsghdr {
unsigned cmsg_len;
# if SANITIZER_WORDSIZE == 64
int __pad1;
# endif
int cmsg_level;
int cmsg_type;
};
# else
// In POSIX, int msg_iovlen; socklen_t msg_controllen; socklen_t cmsg_len; but
// many implementations don't conform to the standard.
@@ -603,7 +631,7 @@ typedef unsigned long __sanitizer_sigset_t;
# elif SANITIZER_APPLE
typedef unsigned __sanitizer_sigset_t;
# elif SANITIZER_HAIKU
typedef unsigned long __sanitizer_sigset_t;
typedef uint64_t __sanitizer_sigset_t;
# elif SANITIZER_LINUX
struct __sanitizer_sigset_t {
// The size is determined by looking at sizeof of real sigset_t on linux.
@@ -1312,16 +1340,14 @@ extern unsigned IOCTL_SNDCTL_COPR_SENDMSG;
extern unsigned IOCTL_SNDCTL_COPR_WCODE;
extern unsigned IOCTL_SNDCTL_COPR_WDATA;
extern unsigned IOCTL_TCFLSH;
extern unsigned IOCTL_TCGETA;
extern unsigned IOCTL_TCGETS;
extern unsigned IOCTL_TCSBRK;
extern unsigned IOCTL_TCSBRKP;
extern unsigned IOCTL_TCSETA;
extern unsigned IOCTL_TCSETAF;
extern unsigned IOCTL_TCSETAW;
# if SANITIZER_TERMIOS_IOCTL_CONSTANTS
extern unsigned IOCTL_TCGETS;
extern unsigned IOCTL_TCSETS;
extern unsigned IOCTL_TCSETSF;
extern unsigned IOCTL_TCSETSW;
# endif
extern unsigned IOCTL_TCXONC;
extern unsigned IOCTL_TIOCGLCKTRMIOS;
extern unsigned IOCTL_TIOCGSOFTCAR;
+3 -12
@@ -225,17 +225,9 @@ void *MapWritableFileToMemory(void *addr, uptr size, fd_t fd, OFF_T offset) {
return (void *)p;
}
static inline bool IntervalsAreSeparate(uptr start1, uptr end1,
uptr start2, uptr end2) {
CHECK(start1 <= end1);
CHECK(start2 <= end2);
return (end1 < start2) || (end2 < start1);
}
# if !SANITIZER_APPLE
// FIXME: this is thread-unsafe, but should not cause problems most of the time.
// When the shadow is mapped only a single thread usually exists (plus maybe
// several worker threads on Mac, which aren't expected to map big chunks of
// memory).
// When the shadow is mapped only a single thread usually exists
bool MemoryRangeIsAvailable(uptr range_start, uptr range_end) {
MemoryMappingLayout proc_maps(/*cache_enabled*/true);
if (proc_maps.Error())
@@ -251,7 +243,6 @@ bool MemoryRangeIsAvailable(uptr range_start, uptr range_end) {
return true;
}
#if !SANITIZER_APPLE
void DumpProcessMap() {
MemoryMappingLayout proc_maps(/*cache_enabled*/true);
const sptr kBufSize = 4095;
@@ -265,7 +256,7 @@ void DumpProcessMap() {
Report("End of process memory map.\n");
UnmapOrDie(filename, kBufSize);
}
#endif
# endif
const char *GetPwd() {
return GetEnv("PWD");
+2 -1
@@ -67,7 +67,8 @@ uptr internal_ptrace(int request, int pid, void *addr, void *data);
uptr internal_waitpid(int pid, int *status, int options);
int internal_fork();
fd_t internal_spawn(const char *argv[], const char *envp[], pid_t *pid);
bool internal_spawn(const char* argv[], const char* envp[], pid_t* pid,
fd_t fd_stdin, fd_t fd_stdout);
int internal_sysctl(const int *name, unsigned int namelen, void *oldp,
uptr *oldlenp, const void *newp, uptr newlen);
@@ -47,6 +47,8 @@ typedef void (*sa_sigaction_t)(int, siginfo_t *, void *);
namespace __sanitizer {
[[maybe_unused]] static atomic_uint8_t signal_handler_is_from_sanitizer[64];
u32 GetUid() {
return getuid();
}
@@ -210,6 +212,20 @@ void UnsetAlternateSignalStack() {
UnmapOrDie(oldstack.ss_sp, oldstack.ss_size);
}
bool IsSignalHandlerFromSanitizer(int signum) {
return atomic_load(&signal_handler_is_from_sanitizer[signum],
memory_order_relaxed);
}
bool SetSignalHandlerFromSanitizer(int signum, bool new_state) {
if (signum < 0 || static_cast<unsigned>(signum) >=
ARRAY_SIZE(signal_handler_is_from_sanitizer))
return false;
return atomic_exchange(&signal_handler_is_from_sanitizer[signum], new_state,
memory_order_relaxed);
}
static void MaybeInstallSigaction(int signum,
SignalHandlerType handler) {
if (GetHandleSignalMode(signum) == kHandleSignalNo) return;
@@ -223,6 +239,9 @@ static void MaybeInstallSigaction(int signum,
if (common_flags()->use_sigaltstack) sigact.sa_flags |= SA_ONSTACK;
CHECK_EQ(0, internal_sigaction(signum, &sigact, nullptr));
VReport(1, "Installed the sigaction for signal %d\n", signum);
if (common_flags()->cloak_sanitizer_signal_handlers)
SetSignalHandlerFromSanitizer(signum, true);
}
void InstallDeadlySignalHandlers(SignalHandlerType handler) {
+103 -38
@@ -20,18 +20,21 @@
#include <mach/mach.h>
// These are not available in older macOS SDKs.
#ifndef CPU_SUBTYPE_X86_64_H
#define CPU_SUBTYPE_X86_64_H ((cpu_subtype_t)8) /* Haswell */
#endif
#ifndef CPU_SUBTYPE_ARM_V7S
#define CPU_SUBTYPE_ARM_V7S ((cpu_subtype_t)11) /* Swift */
#endif
#ifndef CPU_SUBTYPE_ARM_V7K
#define CPU_SUBTYPE_ARM_V7K ((cpu_subtype_t)12)
#endif
#ifndef CPU_TYPE_ARM64
#define CPU_TYPE_ARM64 (CPU_TYPE_ARM | CPU_ARCH_ABI64)
#endif
# ifndef CPU_SUBTYPE_X86_64_H
# define CPU_SUBTYPE_X86_64_H ((cpu_subtype_t)8) /* Haswell */
# endif
# ifndef CPU_SUBTYPE_ARM_V7S
# define CPU_SUBTYPE_ARM_V7S ((cpu_subtype_t)11) /* Swift */
# endif
# ifndef CPU_SUBTYPE_ARM_V7K
# define CPU_SUBTYPE_ARM_V7K ((cpu_subtype_t)12)
# endif
# ifndef CPU_TYPE_ARM64
# define CPU_TYPE_ARM64 (CPU_TYPE_ARM | CPU_ARCH_ABI64)
# endif
# ifndef CPU_SUBTYPE_ARM64E
# define CPU_SUBTYPE_ARM64E ((cpu_subtype_t)2)
# endif
namespace __sanitizer {
@@ -42,7 +45,6 @@ struct MemoryMappedSegmentData {
const char *current_load_cmd_addr;
u32 lc_type;
uptr base_virt_addr;
uptr addr_mask;
};
template <typename Section>
@@ -51,12 +53,62 @@ static void NextSectionLoad(LoadedModule *module, MemoryMappedSegmentData *data,
const Section *sc = (const Section *)data->current_load_cmd_addr;
data->current_load_cmd_addr += sizeof(Section);
uptr sec_start = (sc->addr & data->addr_mask) + data->base_virt_addr;
uptr sec_start = sc->addr + data->base_virt_addr;
uptr sec_end = sec_start + sc->size;
module->addAddressRange(sec_start, sec_end, /*executable=*/false, isWritable,
sc->sectname);
}
static bool VerifyMemoryMapping(MemoryMappingLayout* mapping) {
InternalMmapVector<LoadedModule> modules;
modules.reserve(128); // matches DumpProcessMap
mapping->DumpListOfModules(&modules);
InternalMmapVector<LoadedModule::AddressRange> segments;
for (uptr i = 0; i < modules.size(); ++i) {
for (auto& range : modules[i].ranges()) {
if (range.beg == range.end)
continue;
segments.push_back(range);
}
}
// Verify that none of the segments overlap:
// 1. Sort the segments by the start address
// 2. Check that every segment starts after the previous one ends.
Sort(segments.data(), segments.size(),
[](LoadedModule::AddressRange& a, LoadedModule::AddressRange& b) {
return a.beg < b.beg;
});
// To avoid spam, we only print the report message once-per-process.
static bool invalid_module_map_reported = false;
bool well_formed = true;
for (size_t i = 1; i < segments.size(); i++) {
uptr cur_start = segments[i].beg;
uptr prev_end = segments[i - 1].end;
if (cur_start < prev_end) {
well_formed = false;
VReport(2, "Overlapping mappings: %s start = %p, %s end = %p\n",
segments[i].name, (void*)cur_start, segments[i - 1].name,
(void*)prev_end);
if (!invalid_module_map_reported) {
Report(
"WARN: Invalid dyld module map detected. This is most likely a bug "
"in the sanitizer.\n");
Report("WARN: Backtraces may be unreliable.\n");
invalid_module_map_reported = true;
}
}
}
for (auto& m : modules) m.clear();
mapping->Reset();
return well_formed;
}
void MemoryMappedSegment::AddAddressRanges(LoadedModule *module) {
// Don't iterate over sections when the caller hasn't set up the
// data pointer, when there are no sections, or when the segment
@@ -82,6 +134,7 @@ void MemoryMappedSegment::AddAddressRanges(LoadedModule *module) {
MemoryMappingLayout::MemoryMappingLayout(bool cache_enabled) {
Reset();
VerifyMemoryMapping(this);
}
MemoryMappingLayout::~MemoryMappingLayout() {
@@ -123,7 +176,7 @@ void MemoryMappingLayout::Reset() {
// The dyld load address should be unchanged throughout process execution,
// and it is expensive to compute once many libraries have been loaded,
// so cache it here and do not reset.
static mach_header *dyld_hdr = 0;
static const mach_header* dyld_hdr = 0;
static const char kDyldPath[] = "/usr/lib/dyld";
static const int kDyldImageIdx = -1;
@@ -187,17 +240,22 @@ typedef struct dyld_shared_cache_dylib_text_info
extern bool _dyld_get_shared_cache_uuid(uuid_t uuid);
extern const void *_dyld_get_shared_cache_range(size_t *length);
extern intptr_t _dyld_get_image_slide(const struct mach_header* mh);
extern int dyld_shared_cache_iterate_text(
const uuid_t cacheUuid,
void (^callback)(const dyld_shared_cache_dylib_text_info *info));
SANITIZER_WEAK_IMPORT const struct mach_header* _dyld_get_dyld_header(void);
} // extern "C"
static mach_header *GetDyldImageHeaderViaSharedCache() {
static const mach_header* GetDyldImageHeaderViaSharedCache() {
uuid_t uuid;
bool hasCache = _dyld_get_shared_cache_uuid(uuid);
if (!hasCache)
return nullptr;
if (&_dyld_get_dyld_header != nullptr)
return _dyld_get_dyld_header();
size_t cacheLength;
__block uptr cacheStart = (uptr)_dyld_get_shared_cache_range(&cacheLength);
CHECK(cacheStart && cacheLength);
@@ -255,23 +313,21 @@ static bool NextSegmentLoad(MemoryMappedSegment *segment,
layout_data->current_load_cmd_count--;
if (((const load_command *)lc)->cmd == kLCSegment) {
const SegmentCommand* sc = (const SegmentCommand *)lc;
uptr base_virt_addr, addr_mask;
if (layout_data->current_image == kDyldImageIdx) {
base_virt_addr = (uptr)get_dyld_hdr();
// vmaddr is masked with 0xfffff because on macOS versions < 10.12,
// it contains an absolute address rather than an offset for dyld.
// To make matters even more complicated, this absolute address
// isn't actually the absolute segment address, but the offset portion
// of the address is accurate when combined with the dyld base address,
// and the mask will give just this offset.
addr_mask = 0xfffff;
} else {
base_virt_addr =
(uptr)_dyld_get_image_vmaddr_slide(layout_data->current_image);
addr_mask = ~0;
if (internal_strcmp(sc->segname, "__LINKEDIT") == 0) {
// The LINKEDIT sections are for internal linker use, and may alias
// with the LINKEDIT section for other modules. (If we included them,
// our memory map would contain overlapping sections.)
return false;
}
segment->start = (sc->vmaddr & addr_mask) + base_virt_addr;
uptr base_virt_addr;
if (layout_data->current_image == kDyldImageIdx)
base_virt_addr = (uptr)_dyld_get_image_slide(get_dyld_hdr());
else
base_virt_addr =
(uptr)_dyld_get_image_vmaddr_slide(layout_data->current_image);
segment->start = sc->vmaddr + base_virt_addr;
segment->end = segment->start + sc->vmsize;
// Most callers don't need section information, so only fill this struct
// when required.
@@ -281,9 +337,9 @@ static bool NextSegmentLoad(MemoryMappedSegment *segment,
(const char *)lc + sizeof(SegmentCommand);
seg_data->lc_type = kLCSegment;
seg_data->base_virt_addr = base_virt_addr;
seg_data->addr_mask = addr_mask;
internal_strncpy(seg_data->name, sc->segname,
ARRAY_SIZE(seg_data->name));
seg_data->name[ARRAY_SIZE(seg_data->name) - 1] = 0;
}
// Return the initial protection.
@@ -297,6 +353,7 @@ static bool NextSegmentLoad(MemoryMappedSegment *segment,
? kDyldPath
: _dyld_get_image_name(layout_data->current_image);
internal_strncpy(segment->filename, src, segment->filename_size);
segment->filename[segment->filename_size - 1] = 0;
}
segment->arch = layout_data->current_arch;
internal_memcpy(segment->uuid, layout_data->current_uuid, kModuleUUIDSize);
@@ -311,18 +368,26 @@ ModuleArch ModuleArchFromCpuType(cpu_type_t cputype, cpu_subtype_t cpusubtype) {
case CPU_TYPE_I386:
return kModuleArchI386;
case CPU_TYPE_X86_64:
if (cpusubtype == CPU_SUBTYPE_X86_64_ALL) return kModuleArchX86_64;
if (cpusubtype == CPU_SUBTYPE_X86_64_H) return kModuleArchX86_64H;
if (cpusubtype == CPU_SUBTYPE_X86_64_ALL)
return kModuleArchX86_64;
if (cpusubtype == CPU_SUBTYPE_X86_64_H)
return kModuleArchX86_64H;
CHECK(0 && "Invalid subtype of x86_64");
return kModuleArchUnknown;
case CPU_TYPE_ARM:
if (cpusubtype == CPU_SUBTYPE_ARM_V6) return kModuleArchARMV6;
if (cpusubtype == CPU_SUBTYPE_ARM_V7) return kModuleArchARMV7;
if (cpusubtype == CPU_SUBTYPE_ARM_V7S) return kModuleArchARMV7S;
if (cpusubtype == CPU_SUBTYPE_ARM_V7K) return kModuleArchARMV7K;
if (cpusubtype == CPU_SUBTYPE_ARM_V6)
return kModuleArchARMV6;
if (cpusubtype == CPU_SUBTYPE_ARM_V7)
return kModuleArchARMV7;
if (cpusubtype == CPU_SUBTYPE_ARM_V7S)
return kModuleArchARMV7S;
if (cpusubtype == CPU_SUBTYPE_ARM_V7K)
return kModuleArchARMV7K;
CHECK(0 && "Invalid subtype of ARM");
return kModuleArchUnknown;
case CPU_TYPE_ARM64:
if (cpusubtype == CPU_SUBTYPE_ARM64E)
return kModuleArchARM64E;
return kModuleArchARM64;
default:
CHECK(0 && "Invalid CPU type");
+1 -1
@@ -15,7 +15,7 @@
# define SANITIZER_REDEFINE_BUILTINS_H
// The asm hack only works with GCC and Clang.
# if !defined(_WIN32) && !defined(_AIX)
# if !defined(_WIN32) && !defined(_AIX) && !defined(__APPLE__)
asm(R"(
.set memcpy, __sanitizer_internal_memcpy
@@ -45,6 +45,8 @@ using namespace __sanitizer;
INTERCEPTOR(uptr, bsd_signal, int signum, uptr handler) {
SIGNAL_INTERCEPTOR_ENTER();
if (GetHandleSignalMode(signum) == kHandleSignalExclusive) return 0;
// TODO: support cloak_sanitizer_signal_handlers
SIGNAL_INTERCEPTOR_SIGNAL_IMPL(bsd_signal, signum, handler);
}
#define INIT_BSD_SIGNAL COMMON_INTERCEPT_FUNCTION(bsd_signal)
@@ -56,19 +58,55 @@ INTERCEPTOR(uptr, bsd_signal, int signum, uptr handler) {
INTERCEPTOR(uptr, signal, int signum, uptr handler) {
SIGNAL_INTERCEPTOR_ENTER();
if (GetHandleSignalMode(signum) == kHandleSignalExclusive)
// The user can neither view nor change the signal handler, regardless of
// the cloak_sanitizer_signal_handlers setting. This differs from
// sigaction().
return (uptr) nullptr;
SIGNAL_INTERCEPTOR_SIGNAL_IMPL(signal, signum, handler);
uptr ret = +[](auto signal, int signum, uptr handler) {
SIGNAL_INTERCEPTOR_SIGNAL_IMPL(signal, signum, handler);
}(signal, signum, handler);
if (ret != sig_err && SetSignalHandlerFromSanitizer(signum, false))
// If the user sets a signal handler, it becomes uncloaked, even if they
// reuse a sanitizer's signal handler.
ret = sig_dfl;
return ret;
}
#define INIT_SIGNAL COMMON_INTERCEPT_FUNCTION(signal)
INTERCEPTOR(int, sigaction_symname, int signum,
const __sanitizer_sigaction *act, __sanitizer_sigaction *oldact) {
SIGNAL_INTERCEPTOR_ENTER();
if (GetHandleSignalMode(signum) == kHandleSignalExclusive) {
if (!oldact) return 0;
act = nullptr;
// If cloak_sanitizer_signal_handlers=true, the user can neither view nor
// change the signal handler.
// If false, the user can view but not change the signal handler. This
// differs from signal().
}
SIGNAL_INTERCEPTOR_SIGACTION_IMPL(signum, act, oldact);
int ret = +[](int signum, const __sanitizer_sigaction* act,
__sanitizer_sigaction* oldact) {
SIGNAL_INTERCEPTOR_SIGACTION_IMPL(signum, act, oldact);
}(signum, act, oldact);
if (act) {
if (ret == 0 && SetSignalHandlerFromSanitizer(signum, false)) {
// If the user sets a signal handler, it becomes uncloaked, even if they
// reuse a sanitizer's signal handler.
if (oldact)
oldact->handler = reinterpret_cast<__sanitizer_sighandler_ptr>(sig_dfl);
}
} else if (ret == 0 && oldact && IsSignalHandlerFromSanitizer(signum)) {
oldact->handler = reinterpret_cast<__sanitizer_sighandler_ptr>(sig_dfl);
}
return ret;
}
#define INIT_SIGACTION COMMON_INTERCEPT_FUNCTION(sigaction_symname)
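A hypothetical user-level view of the cloaking behaviour implemented above; this program is not part of the change and assumes cloak_sanitizer_signal_handlers=1 with the sanitizer's own SIGSEGV handler installed at startup:

#include <signal.h>
#include <stdio.h>
int main() {
  struct sigaction old = {};
  // Query only (act == nullptr): the interceptor reports SIG_DFL rather than
  // exposing the sanitizer's handler, so the handler stays "cloaked".
  sigaction(SIGSEGV, nullptr, &old);
  printf("looks like SIG_DFL: %d\n", old.sa_handler == SIG_DFL);
  // Installing a user handler for a non-exclusively-handled signal would
  // un-cloak that signal for later queries, per SetSignalHandlerFromSanitizer.
  return 0;
}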
+1 -1
@@ -38,7 +38,7 @@ class SuspendedThreadsList {
}
virtual uptr ThreadCount() const { UNIMPLEMENTED(); }
virtual tid_t GetThreadID(uptr index) const { UNIMPLEMENTED(); }
virtual ThreadID GetThreadID(uptr index) const { UNIMPLEMENTED(); }
protected:
~SuspendedThreadsList() {}
@@ -94,17 +94,17 @@ class SuspendedThreadsListLinux final : public SuspendedThreadsList {
public:
SuspendedThreadsListLinux() { thread_ids_.reserve(1024); }
tid_t GetThreadID(uptr index) const override;
ThreadID GetThreadID(uptr index) const override;
uptr ThreadCount() const override;
bool ContainsTid(tid_t thread_id) const;
void Append(tid_t tid);
bool ContainsTid(ThreadID thread_id) const;
void Append(ThreadID tid);
PtraceRegistersStatus GetRegistersAndSP(uptr index,
InternalMmapVector<uptr> *buffer,
uptr *sp) const override;
private:
InternalMmapVector<tid_t> thread_ids_;
InternalMmapVector<ThreadID> thread_ids_;
};
// Structure for passing arguments into the tracer thread.
@@ -137,10 +137,10 @@ class ThreadSuspender {
private:
SuspendedThreadsListLinux suspended_threads_list_;
pid_t pid_;
bool SuspendThread(tid_t thread_id);
bool SuspendThread(ThreadID thread_id);
};
bool ThreadSuspender::SuspendThread(tid_t tid) {
bool ThreadSuspender::SuspendThread(ThreadID tid) {
int pterrno;
if (internal_iserror(internal_ptrace(PTRACE_ATTACH, tid, nullptr, nullptr),
&pterrno)) {
@@ -210,7 +210,7 @@ void ThreadSuspender::KillAllThreads() {
bool ThreadSuspender::SuspendAllThreads() {
ThreadLister thread_lister(pid_);
bool retry = true;
InternalMmapVector<tid_t> threads;
InternalMmapVector<ThreadID> threads;
threads.reserve(128);
for (int i = 0; i < 30 && retry; ++i) {
retry = false;
@@ -226,7 +226,7 @@ bool ThreadSuspender::SuspendAllThreads() {
case ThreadLister::Ok:
break;
}
for (tid_t tid : threads) {
for (ThreadID tid : threads) {
// Are we already attached to this thread?
// Currently this check takes linear time, however the number of threads
// is usually small.
@@ -403,7 +403,77 @@ struct ScopedSetTracerPID {
}
};
// This detects whether ptrace is blocked (e.g., by seccomp), by forking and
// then attempting ptrace.
// This separate check is necessary because StopTheWorld() creates a thread
// with a shared virtual address space and shared TLS, and therefore
// cannot use waitpid() due to the shared errno.
static void TestPTrace() {
# if SANITIZER_SPARC
// internal_fork() on SPARC actually calls __fork(). We can't safely fork,
// because it's possible seccomp has been configured to disallow fork() but
// allow clone().
VReport(1, "WARNING: skipping TestPTrace() because this is SPARC\n");
VReport(1,
"If seccomp blocks ptrace, LeakSanitizer may hang without further "
"notice\n");
VReport(
1,
"If seccomp does not block ptrace, you can safely ignore this warning\n");
# else
// Heuristic: only check the first time this is called. This is not always
// correct (e.g., user manually triggers leak detection, then updates
// seccomp, then leak detection is triggered again).
static bool checked = false;
if (checked)
return;
checked = true;
// Hopefully internal_fork() is not too expensive, thanks to copy-on-write.
// Besides, this is only called the first time.
// Note that internal_fork() on non-SPARC Linux actually calls
// SYSCALL(clone); thus, it is reasonable to use it because if seccomp kills
// TestPTrace(), it would have killed StopTheWorld() anyway.
int pid = internal_fork();
if (pid < 0) {
int rverrno;
if (internal_iserror(pid, &rverrno))
VReport(0, "WARNING: TestPTrace() failed to fork (errno %d)\n", rverrno);
// We don't abort the sanitizer - it's still worth letting the sanitizer
// try.
return;
}
if (pid == 0) {
// Child subprocess
// TODO: consider checking return value of internal_ptrace, to handle
// SCMP_ACT_ERRNO. However, be careful not to consume too many
// resources performing a proper ptrace.
internal_ptrace(PTRACE_ATTACH, 0, nullptr, nullptr);
internal__exit(0);
} else {
int wstatus;
internal_waitpid(pid, &wstatus, 0);
// Handle SCMP_ACT_KILL
if (WIFSIGNALED(wstatus)) {
VReport(0,
"WARNING: ptrace appears to be blocked (is seccomp enabled?). "
"LeakSanitizer may hang.\n");
VReport(0, "Child exited with signal %d.\n", WTERMSIG(wstatus));
// We don't abort the sanitizer - it's still worth letting the sanitizer
// try.
}
}
# endif
}
void StopTheWorld(StopTheWorldCallback callback, void *argument) {
TestPTrace();
StopTheWorldScope in_stoptheworld;
// Prepare the arguments for TracerThread.
struct TracerThreadArgument tracer_thread_argument;
@@ -457,7 +527,8 @@ void StopTheWorld(StopTheWorldCallback callback, void *argument) {
internal_prctl(PR_SET_PTRACER, tracer_pid, 0, 0, 0);
// Allow the tracer thread to start.
tracer_thread_argument.mutex.Unlock();
// NOTE: errno is shared between this thread and the tracer thread.
// NOTE: errno is shared between this thread and the tracer thread
// (clone was called without CLONE_SETTLS / newtls).
// internal_waitpid() may call syscall() which can access/spoil errno,
// so we can't call it now. Instead we wait for the tracer thread to finish using
// the spin loop below. Man page for sched_yield() says "In the Linux
@@ -546,7 +617,7 @@ static constexpr uptr kExtraRegs[] = {0};
#error "Unsupported architecture"
#endif // SANITIZER_ANDROID && defined(__arm__)
tid_t SuspendedThreadsListLinux::GetThreadID(uptr index) const {
ThreadID SuspendedThreadsListLinux::GetThreadID(uptr index) const {
CHECK_LT(index, thread_ids_.size());
return thread_ids_[index];
}
@@ -555,14 +626,14 @@ uptr SuspendedThreadsListLinux::ThreadCount() const {
return thread_ids_.size();
}
bool SuspendedThreadsListLinux::ContainsTid(tid_t thread_id) const {
bool SuspendedThreadsListLinux::ContainsTid(ThreadID thread_id) const {
for (uptr i = 0; i < thread_ids_.size(); i++) {
if (thread_ids_[i] == thread_id) return true;
}
return false;
}
void SuspendedThreadsListLinux::Append(tid_t tid) {
void SuspendedThreadsListLinux::Append(ThreadID tid) {
thread_ids_.push_back(tid);
}
@@ -23,7 +23,7 @@
namespace __sanitizer {
typedef struct {
tid_t tid;
ThreadID tid;
thread_t thread;
} SuspendedThreadInfo;
@@ -31,7 +31,7 @@ class SuspendedThreadsListMac final : public SuspendedThreadsList {
public:
SuspendedThreadsListMac() = default;
tid_t GetThreadID(uptr index) const override;
ThreadID GetThreadID(uptr index) const override;
thread_t GetThread(uptr index) const;
uptr ThreadCount() const override;
bool ContainsThread(thread_t thread) const;
@@ -111,7 +111,7 @@ typedef x86_thread_state32_t regs_struct;
#error "Unsupported architecture"
#endif
tid_t SuspendedThreadsListMac::GetThreadID(uptr index) const {
ThreadID SuspendedThreadsListMac::GetThreadID(uptr index) const {
CHECK_LT(index, threads_.size());
return threads_[index].tid;
}
@@ -52,17 +52,17 @@ class SuspendedThreadsListNetBSD final : public SuspendedThreadsList {
public:
SuspendedThreadsListNetBSD() { thread_ids_.reserve(1024); }
tid_t GetThreadID(uptr index) const;
ThreadID GetThreadID(uptr index) const;
uptr ThreadCount() const;
bool ContainsTid(tid_t thread_id) const;
void Append(tid_t tid);
bool ContainsTid(ThreadID thread_id) const;
void Append(ThreadID tid);
PtraceRegistersStatus GetRegistersAndSP(uptr index,
InternalMmapVector<uptr> *buffer,
uptr *sp) const;
private:
InternalMmapVector<tid_t> thread_ids_;
InternalMmapVector<ThreadID> thread_ids_;
};
struct TracerThreadArgument {
@@ -313,7 +313,7 @@ void StopTheWorld(StopTheWorldCallback callback, void *argument) {
}
}
tid_t SuspendedThreadsListNetBSD::GetThreadID(uptr index) const {
ThreadID SuspendedThreadsListNetBSD::GetThreadID(uptr index) const {
CHECK_LT(index, thread_ids_.size());
return thread_ids_[index];
}
@@ -322,7 +322,7 @@ uptr SuspendedThreadsListNetBSD::ThreadCount() const {
return thread_ids_.size();
}
bool SuspendedThreadsListNetBSD::ContainsTid(tid_t thread_id) const {
bool SuspendedThreadsListNetBSD::ContainsTid(ThreadID thread_id) const {
for (uptr i = 0; i < thread_ids_.size(); i++) {
if (thread_ids_[i] == thread_id)
return true;
@@ -330,7 +330,7 @@ bool SuspendedThreadsListNetBSD::ContainsTid(tid_t thread_id) const {
return false;
}
void SuspendedThreadsListNetBSD::Append(tid_t tid) {
void SuspendedThreadsListNetBSD::Append(ThreadID tid) {
thread_ids_.push_back(tid);
}
@@ -38,7 +38,7 @@ struct SuspendedThreadsListWindows final : public SuspendedThreadsList {
InternalMmapVector<uptr> *buffer,
uptr *sp) const override;
tid_t GetThreadID(uptr index) const override;
ThreadID GetThreadID(uptr index) const override;
uptr ThreadCount() const override;
};
@@ -68,7 +68,7 @@ PtraceRegistersStatus SuspendedThreadsListWindows::GetRegistersAndSP(
return REGISTERS_AVAILABLE;
}
tid_t SuspendedThreadsListWindows::GetThreadID(uptr index) const {
ThreadID SuspendedThreadsListWindows::GetThreadID(uptr index) const {
CHECK_LT(index, threadIds.size());
return threadIds[index];
}
@@ -83,7 +83,7 @@ class SymbolizerProcess {
const char *SendCommand(const char *command);
protected:
~SymbolizerProcess() {}
~SymbolizerProcess();
/// The maximum number of arguments required to invoke a tool process.
static const unsigned kArgVMax = 16;
@@ -114,6 +114,10 @@ class SymbolizerProcess {
fd_t input_fd_;
fd_t output_fd_;
// We hold on to the child's stdin fd (the read end of the pipe)
// so that when we write to it, we don't get a SIGPIPE
fd_t child_stdin_fd_;
InternalMmapVector<char> buffer_;
static const uptr kMaxTimesRestarted = 5;
@@ -476,10 +476,11 @@ const char *LLVMSymbolizer::FormatAndSendCommand(const char *command_prefix,
return symbolizer_process_->SendCommand(buffer_);
}
SymbolizerProcess::SymbolizerProcess(const char *path, bool use_posix_spawn)
SymbolizerProcess::SymbolizerProcess(const char* path, bool use_posix_spawn)
: path_(path),
input_fd_(kInvalidFd),
output_fd_(kInvalidFd),
child_stdin_fd_(kInvalidFd),
times_restarted_(0),
failed_to_start_(false),
reported_invalid_path_(false),
@@ -488,6 +489,11 @@ SymbolizerProcess::SymbolizerProcess(const char *path, bool use_posix_spawn)
CHECK_NE(path_[0], '\0');
}
SymbolizerProcess::~SymbolizerProcess() {
if (child_stdin_fd_ != kInvalidFd)
CloseFile(child_stdin_fd_);
}
static bool IsSameModule(const char *path) {
if (const char *ProcessName = GetProcessName()) {
if (const char *SymbolizerName = StripModuleName(path)) {
@@ -533,6 +539,10 @@ bool SymbolizerProcess::Restart() {
CloseFile(input_fd_);
if (output_fd_ != kInvalidFd)
CloseFile(output_fd_);
if (child_stdin_fd_ != kInvalidFd) {
CloseFile(child_stdin_fd_);
child_stdin_fd_ = kInvalidFd; // Don't free in destructor
}
return StartSymbolizerSubprocess();
}
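A minimal standalone sketch of why the parent keeps a read end open (illustrative only; the real code stores it in child_stdin_fd_):

#include <unistd.h>
int main() {
  int to_child[2];  // to_child[0] = read end (child's stdin), to_child[1] = write end
  if (pipe(to_child) != 0) return 1;
  // If every descriptor for to_child[0] were closed (for example the child
  // exited and the parent closed its copy), a later write() to to_child[1]
  // would raise SIGPIPE. Keeping the parent's copy of the read end open means
  // the write still has a reader and simply lands in the pipe buffer.
  (void)write(to_child[1], "x", 1);
  close(to_child[0]);
  close(to_child[1]);
  return 0;
}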
+85 -28
@@ -78,13 +78,25 @@ class AtosSymbolizerProcess final : public SymbolizerProcess {
}
bool ReachedEndOfOutput(const char *buffer, uptr length) const override {
return (length >= 1 && buffer[length - 1] == '\n');
if (common_flags()->symbolize_inline_frames) {
// When running with -i, atos sends two newlines at the end of each
// address it symbolizes. This indicates the end of the set of frames
// for a particular address.
return length >= 2 && buffer[length - 1] == '\n' &&
buffer[length - 2] == '\n';
} else {
// When running without -i, atos only sends a single newline at
// the end of each address it symbolizes.
return length >= 1 && buffer[length - 1] == '\n';
}
}
void GetArgV(const char *path_to_binary,
const char *(&argv)[kArgVMax]) const override {
int i = 0;
argv[i++] = path_to_binary;
if (common_flags()->symbolize_inline_frames)
argv[i++] = "-i";
argv[i++] = "-p";
argv[i++] = &pid_str_[0];
if (GetMacosAlignedVersion() == MacosVersion(10, 9)) {
@@ -102,12 +114,16 @@ class AtosSymbolizerProcess final : public SymbolizerProcess {
#undef K_ATOS_ENV_VAR
static bool ParseCommandOutput(const char *str, uptr addr, char **out_name,
char **out_module, char **out_file, uptr *line,
uptr *start_address) {
// Parses a single frame (one line) from str, and returns the pointer to the
// next character to parse (i.e. after the newline) if successful. If
// it fails, returns NULL.
static const char* ParseCommandOutput(const char* str, uptr addr,
char** out_name, char** out_module,
char** out_file, uptr* line,
uptr* start_address) {
// Trim ending newlines.
char *trim;
ExtractTokenUpToDelimiter(str, "\n", &trim);
str = ExtractTokenUpToDelimiter(str, "\n", &trim);
// The line from `atos` is in one of these formats:
// myfunction (in library.dylib) (sourcefile.c:17)
@@ -124,7 +140,7 @@ static bool ParseCommandOutput(const char *str, uptr addr, char **out_name,
if (rest[0] == '\0') {
InternalFree(symbol_name);
InternalFree(trim);
return false;
return NULL;
}
if (internal_strncmp(symbol_name, "0x", 2) != 0)
@@ -149,7 +165,7 @@ static bool ParseCommandOutput(const char *str, uptr addr, char **out_name,
}
InternalFree(trim);
return true;
return str;
}
AtosSymbolizer::AtosSymbolizer(const char *path, LowLevelAllocator *allocator)
@@ -161,31 +177,72 @@ bool AtosSymbolizer::SymbolizePC(uptr addr, SymbolizedStack *stack) {
char command[32];
internal_snprintf(command, sizeof(command), "0x%zx\n", addr);
const char *buf = process_->SendCommand(command);
if (!buf) return false;
uptr line;
uptr start_address = AddressInfo::kUnknown;
if (!ParseCommandOutput(buf, addr, &stack->info.function, &stack->info.module,
&stack->info.file, &line, &start_address)) {
Report("WARNING: atos failed to symbolize address \"0x%zx\"\n", addr);
if (!buf)
return false;
}
stack->info.line = (int)line;
if (start_address == AddressInfo::kUnknown) {
// Fallback to dladdr() to get function start address if atos doesn't report
// it.
Dl_info info;
int result = dladdr((const void *)addr, &info);
if (result)
start_address = reinterpret_cast<uptr>(info.dli_saddr);
SymbolizedStack* last = stack;
bool top_frame = true;
// Parse one line of input (i.e. one frame).
//
// When symbolize_inline_frames=true, an empty line
// (i.e. \n at the beginning of a line) indicates that the last
// frame has been sent.
//
// When symbolize_inline_frames=false, the symbolizer will send only
// one frame (without an empty line), so the loop runs exactly once
// and hits an early `break`.
while (*buf != '\n') {
uptr line;
uptr start_address = AddressInfo::kUnknown;
SymbolizedStack* cur;
if (top_frame) {
cur = stack;
} else {
cur = SymbolizedStack::New(stack->info.address);
cur->info.FillModuleInfo(stack->info.module, stack->info.module_offset,
stack->info.module_arch);
last->next = cur;
last = cur;
}
// Parse one line of input (i.e. one frame)
// If this succeeds, buf will be updated to point to the first character
// after the newline.
buf = ParseCommandOutput(buf, addr, &cur->info.function, &cur->info.module,
&cur->info.file, &line, &start_address);
// Upon failure, ParseCommandOutput returns NULL.
if (!buf) {
Report("WARNING: atos failed to symbolize address \"0x%zx\"\n", addr);
return false;
}
cur->info.line = (int)line;
if (top_frame && start_address == AddressInfo::kUnknown) {
// Fallback to dladdr() to get function start address if atos doesn't
// report it.
Dl_info info;
int result = dladdr((const void*)addr, &info);
if (result)
start_address = reinterpret_cast<uptr>(info.dli_saddr);
}
// Only assign to `function_offset` if we were able to get the function's
// start address and we got a sensible `start_address` (dladdr doesn't
// always ensure that `addr >= sym_addr`).
if (start_address != AddressInfo::kUnknown && addr >= start_address) {
cur->info.function_offset = addr - start_address;
}
// atos only sends one line when inline frames are off
if (!common_flags()->symbolize_inline_frames)
break;
top_frame = false;
}
// Only assign to `function_offset` if we were able to get the function's
// start address and we got a sensible `start_address` (dladdr doesn't always
// ensure that `addr >= sym_addr`).
if (start_address != AddressInfo::kUnknown && addr >= start_address) {
stack->info.function_offset = addr - start_address;
}
return true;
}
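For reference, a hypothetical shape of the atos reply the loop above expects (illustrative, not captured from a real run). With symbolize_inline_frames=true, each address yields one line per frame followed by an empty line; without -i, a single line ends the reply:

  inlined_callee (in libfoo.dylib) (foo.c:17)
  caller (in libfoo.dylib) (foo.c:42)
  <empty line>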
@@ -156,30 +156,34 @@ bool SymbolizerProcess::StartSymbolizerSubprocess() {
Printf("\n");
}
fd_t infd[2] = {}, outfd[2] = {};
if (!CreateTwoHighNumberedPipes(infd, outfd)) {
Report(
"WARNING: Can't create a socket pair to start "
"external symbolizer (errno: %d)\n",
errno);
return false;
}
if (use_posix_spawn_) {
# if SANITIZER_APPLE
fd_t fd = internal_spawn(argv, const_cast<const char **>(GetEnvP()), &pid);
if (fd == kInvalidFd) {
bool success = internal_spawn(argv, const_cast<const char**>(GetEnvP()),
&pid, outfd[0], infd[1]);
if (!success) {
Report("WARNING: failed to spawn external symbolizer (errno: %d)\n",
errno);
internal_close(infd[0]);
internal_close(outfd[1]);
return false;
}
input_fd_ = fd;
output_fd_ = fd;
// We intentionally hold on to the read-end so that we don't get a SIGPIPE
child_stdin_fd_ = outfd[0];
# else // SANITIZER_APPLE
UNIMPLEMENTED();
# endif // SANITIZER_APPLE
} else {
fd_t infd[2] = {}, outfd[2] = {};
if (!CreateTwoHighNumberedPipes(infd, outfd)) {
Report(
"WARNING: Can't create a socket pair to start "
"external symbolizer (errno: %d)\n",
errno);
return false;
}
pid = StartSubprocess(path_, argv, GetEnvP(), /* stdin */ outfd[0],
/* stdout */ infd[1]);
if (pid < 0) {
@@ -187,11 +191,11 @@ bool SymbolizerProcess::StartSymbolizerSubprocess() {
internal_close(outfd[1]);
return false;
}
input_fd_ = infd[0];
output_fd_ = outfd[1];
}
input_fd_ = infd[0];
output_fd_ = outfd[1];
CHECK_GT(pid, 0);
// Check that symbolizer subprocess started successfully.
@@ -505,6 +509,13 @@ static void ChooseSymbolizerTools(IntrusiveList<SymbolizerTool> *list,
}
# if SANITIZER_APPLE
if (list->empty()) {
Report(
"WARN: No external symbolizers found. Symbols may be missing or "
"unreliable.\n");
Report(
"HINT: Is PATH set? Does sandbox allow file-read of /usr/bin/atos?\n");
}
VReport(2, "Using dladdr symbolizer.\n");
list->push_back(new (*allocator) DlAddrSymbolizer());
# endif // SANITIZER_APPLE
+5 -4
@@ -80,7 +80,7 @@ void ThreadContextBase::SetFinished() {
OnFinished();
}
void ThreadContextBase::SetStarted(tid_t _os_id, ThreadType _thread_type,
void ThreadContextBase::SetStarted(ThreadID _os_id, ThreadType _thread_type,
void *arg) {
status = ThreadStatusRunning;
os_id = _os_id;
@@ -228,7 +228,8 @@ static bool FindThreadContextByOsIdCallback(ThreadContextBase *tctx,
tctx->status != ThreadStatusDead);
}
ThreadContextBase *ThreadRegistry::FindThreadContextByOsIDLocked(tid_t os_id) {
ThreadContextBase *ThreadRegistry::FindThreadContextByOsIDLocked(
ThreadID os_id) {
return FindThreadContextLocked(FindThreadContextByOsIdCallback,
(void *)os_id);
}
@@ -322,8 +323,8 @@ ThreadStatus ThreadRegistry::FinishThread(u32 tid) {
return prev_status;
}
void ThreadRegistry::StartThread(u32 tid, tid_t os_id, ThreadType thread_type,
void *arg) {
void ThreadRegistry::StartThread(u32 tid, ThreadID os_id,
ThreadType thread_type, void *arg) {
ThreadRegistryLock l(this);
running_threads_++;
ThreadContextBase *tctx = threads_[tid];
+4 -4
@@ -43,7 +43,7 @@ class ThreadContextBase {
const u32 tid; // Thread ID. Main thread should have tid = 0.
u64 unique_id; // Unique thread ID.
u32 reuse_count; // Number of times this tid was reused.
tid_t os_id; // PID (used for reporting).
ThreadID os_id; // PID (used for reporting).
uptr user_id; // Some opaque user thread id (e.g. pthread_t).
char name[64]; // As annotated by user.
@@ -62,7 +62,7 @@ class ThreadContextBase {
void SetDead();
void SetJoined(void *arg);
void SetFinished();
void SetStarted(tid_t _os_id, ThreadType _thread_type, void *arg);
void SetStarted(ThreadID _os_id, ThreadType _thread_type, void *arg);
void SetCreated(uptr _user_id, u64 _unique_id, bool _detached,
u32 _parent_tid, u32 _stack_tid, void *arg);
void Reset();
@@ -126,7 +126,7 @@ class SANITIZER_MUTEX ThreadRegistry {
// is found.
ThreadContextBase *FindThreadContextLocked(FindThreadCallback cb,
void *arg);
ThreadContextBase *FindThreadContextByOsIDLocked(tid_t os_id);
ThreadContextBase *FindThreadContextByOsIDLocked(ThreadID os_id);
void SetThreadName(u32 tid, const char *name);
void SetThreadNameByUserId(uptr user_id, const char *name);
@@ -134,7 +134,7 @@ class SANITIZER_MUTEX ThreadRegistry {
void JoinThread(u32 tid, void *arg);
// Finishes thread and returns previous status.
ThreadStatus FinishThread(u32 tid);
void StartThread(u32 tid, tid_t os_id, ThreadType thread_type, void *arg);
void StartThread(u32 tid, ThreadID os_id, ThreadType thread_type, void *arg);
u32 ConsumeThreadUserId(uptr user_id);
void SetThreadUserId(u32 tid, uptr user_id);
+1 -3
@@ -108,9 +108,7 @@ int internal_dlinfo(void *handle, int request, void *p) {
// In contrast to POSIX, on Windows GetCurrentThreadId()
// returns a system-unique identifier.
tid_t GetTid() {
return GetCurrentThreadId();
}
ThreadID GetTid() { return GetCurrentThreadId(); }
uptr GetThreadSelf() {
return GetTid();
+2 -2
@@ -165,7 +165,7 @@ int __tsan_get_report_mutex(void *report, uptr idx, uptr *mutex_id, void **addr,
}
SANITIZER_INTERFACE_ATTRIBUTE
int __tsan_get_report_thread(void *report, uptr idx, int *tid, tid_t *os_id,
int __tsan_get_report_thread(void *report, uptr idx, int *tid, ThreadID *os_id,
int *running, const char **name, int *parent_tid,
void **trace, uptr trace_size) {
const ReportDesc *rep = (ReportDesc *)report;
@@ -242,7 +242,7 @@ const char *__tsan_locate_address(uptr addr, char *name, uptr name_size,
SANITIZER_INTERFACE_ATTRIBUTE
int __tsan_get_alloc_stack(uptr addr, uptr *trace, uptr size, int *thread_id,
tid_t *os_id) {
ThreadID *os_id) {
MBlock *b = 0;
Allocator *a = allocator();
if (a->PointerIsMine((void *)addr)) {
+37
@@ -20,6 +20,43 @@
#include "tsan_rtl.h"
#include "ubsan/ubsan_flags.h"
#if SANITIZER_APPLE && !SANITIZER_GO
namespace __sanitizer {
template <>
inline bool FlagHandler<LockDuringWriteSetting>::Parse(const char *value) {
if (internal_strcmp(value, "on") == 0) {
*t_ = kLockDuringAllWrites;
return true;
}
if (internal_strcmp(value, "disable_for_current_process") == 0) {
*t_ = kNoLockDuringWritesCurrentProcess;
return true;
}
if (internal_strcmp(value, "disable_for_all_processes") == 0) {
*t_ = kNoLockDuringWritesAllProcesses;
return true;
}
Printf("ERROR: Invalid value for signal handler option: '%s'\n", value);
return false;
}
template <>
inline bool FlagHandler<LockDuringWriteSetting>::Format(char *buffer,
uptr size) {
switch (*t_) {
case kLockDuringAllWrites:
return FormatString(buffer, size, "on");
case kNoLockDuringWritesCurrentProcess:
return FormatString(buffer, size, "disable_for_current_process");
case kNoLockDuringWritesAllProcesses:
return FormatString(buffer, size, "disable_for_all_processes");
}
}
} // namespace __sanitizer
#endif // SANITIZER_APPLE && !SANITIZER_GO
namespace __tsan {
// Can be overriden in frontend.
+8
@@ -16,6 +16,14 @@
#include "sanitizer_common/sanitizer_flags.h"
#include "sanitizer_common/sanitizer_deadlock_detector_interface.h"
#if SANITIZER_APPLE && !SANITIZER_GO
enum LockDuringWriteSetting {
kLockDuringAllWrites,
kNoLockDuringWritesCurrentProcess,
kNoLockDuringWritesAllProcesses,
};
#endif
namespace __tsan {
struct Flags : DDFlags {
+12
@@ -80,3 +80,15 @@ TSAN_FLAG(bool, shared_ptr_interceptor, true,
TSAN_FLAG(bool, print_full_thread_history, false,
"If set, prints thread creation stacks for the threads involved in "
"the report and their ancestors up to the main thread.")
#if SANITIZER_APPLE && !SANITIZER_GO
TSAN_FLAG(LockDuringWriteSetting, lock_during_write, kLockDuringAllWrites,
"Determines whether to obtain a lock while writing logs or error "
"reports. "
"\"on\" - [default] lock during all writes. "
"\"disable_for_current_process\" - don't lock during all writes in "
"the current process, but do lock for all writes in child "
"processes."
"\"disable_for_all_processes\" - don't lock during all writes in "
"the current process and it's children processes.")
#endif
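As with other runtime flags, this would be set through TSAN_OPTIONS; for example, launching a program with TSAN_OPTIONS=lock_during_write=disable_for_current_process (hypothetical invocation) skips the locking only in that process, while child processes keep the default behaviour described above.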
+9 -1
@@ -1,6 +1,9 @@
#ifndef TSAN_INTERCEPTORS_H
#define TSAN_INTERCEPTORS_H
#if SANITIZER_APPLE && !SANITIZER_GO
# include "sanitizer_common/sanitizer_mac.h"
#endif
#include "sanitizer_common/sanitizer_stacktrace.h"
#include "tsan_rtl.h"
@@ -43,7 +46,12 @@ inline bool in_symbolizer() {
#endif
inline bool MustIgnoreInterceptor(ThreadState *thr) {
return !thr->is_inited || thr->ignore_interceptors || thr->in_ignored_lib;
return !thr->is_inited || thr->ignore_interceptors || thr->in_ignored_lib
#if SANITIZER_APPLE && !SANITIZER_GO
|| (flags()->lock_during_write != kLockDuringAllWrites &&
thr->in_internal_write_call)
#endif
;
}
} // namespace __tsan
+19
@@ -281,6 +281,25 @@ TSAN_INTERCEPTOR(void, os_unfair_lock_lock, os_unfair_lock_t lock) {
Acquire(thr, pc, (uptr)lock);
}
// os_unfair_lock_lock_with_flags was introduced in macOS 15
# if defined(__MAC_15_0) || defined(__IPHONE_18_0) || defined(__TVOS_18_0) || \
defined(__VISIONOS_2_0) || defined(__WATCHOS_11_0)
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wunguarded-availability-new"
// We're just intercepting this - if it doesn't exist on the platform, then the
// process shouldn't have called it in the first place.
TSAN_INTERCEPTOR(void, os_unfair_lock_lock_with_flags, os_unfair_lock_t lock,
os_unfair_lock_flags_t flags) {
if (!cur_thread()->is_inited || cur_thread()->is_dead) {
return REAL(os_unfair_lock_lock_with_flags)(lock, flags);
}
SCOPED_TSAN_INTERCEPTOR(os_unfair_lock_lock_with_flags, lock, flags);
REAL(os_unfair_lock_lock_with_flags)(lock, flags);
Acquire(thr, pc, (uptr)lock);
}
# pragma clang diagnostic pop
# endif
TSAN_INTERCEPTOR(void, os_unfair_lock_lock_with_options, os_unfair_lock_t lock,
u32 options) {
if (!cur_thread()->is_inited || cur_thread()->is_dead) {
+78 -32
@@ -22,6 +22,7 @@
#include "sanitizer_common/sanitizer_internal_defs.h"
#include "sanitizer_common/sanitizer_libc.h"
#include "sanitizer_common/sanitizer_linux.h"
#include "sanitizer_common/sanitizer_placement_new.h"
#include "sanitizer_common/sanitizer_platform_interceptors.h"
#include "sanitizer_common/sanitizer_platform_limits_netbsd.h"
#include "sanitizer_common/sanitizer_platform_limits_posix.h"
@@ -30,6 +31,9 @@
#include "sanitizer_common/sanitizer_tls_get_addr.h"
#include "sanitizer_common/sanitizer_vector.h"
#include "tsan_fd.h"
#if SANITIZER_APPLE && !SANITIZER_GO
# include "tsan_flags.h"
#endif
#include "tsan_interceptors.h"
#include "tsan_interface.h"
#include "tsan_mman.h"
@@ -78,17 +82,6 @@ struct ucontext_t {
};
#endif
#if defined(__x86_64__) || defined(__mips__) || SANITIZER_PPC64V1 || \
defined(__s390x__)
#define PTHREAD_ABI_BASE "GLIBC_2.3.2"
#elif defined(__aarch64__) || SANITIZER_PPC64V2
#define PTHREAD_ABI_BASE "GLIBC_2.17"
#elif SANITIZER_LOONGARCH64
#define PTHREAD_ABI_BASE "GLIBC_2.36"
#elif SANITIZER_RISCV64
# define PTHREAD_ABI_BASE "GLIBC_2.27"
#endif
extern "C" int pthread_attr_init(void *attr);
extern "C" int pthread_attr_destroy(void *attr);
DECLARE_REAL(int, pthread_attr_getdetachstate, void *, void *)
@@ -340,11 +333,6 @@ void ScopedInterceptor::DisableIgnoresImpl() {
}
#define TSAN_INTERCEPT(func) INTERCEPT_FUNCTION(func)
#if SANITIZER_FREEBSD || SANITIZER_NETBSD
# define TSAN_INTERCEPT_VER(func, ver) INTERCEPT_FUNCTION(func)
#else
# define TSAN_INTERCEPT_VER(func, ver) INTERCEPT_FUNCTION_VER(func, ver)
#endif
#if SANITIZER_FREEBSD
# define TSAN_MAYBE_INTERCEPT_FREEBSD_ALIAS(func) \
INTERCEPT_FUNCTION(_pthread_##func)
@@ -1145,6 +1133,22 @@ TSAN_INTERCEPTOR(int, pthread_create,
TSAN_INTERCEPTOR(int, pthread_join, void *th, void **ret) {
SCOPED_INTERCEPTOR_RAW(pthread_join, th, ret);
#if SANITIZER_ANDROID
{
// In Bionic, if the target thread has already exited when pthread_detach is
// called, pthread_detach will call pthread_join internally to clean it up.
// In that case, the thread has already been consumed by the pthread_detach
// interceptor.
Tid tid = ctx->thread_registry.FindThread(
[](ThreadContextBase* tctx, void* arg) {
return tctx->user_id == (uptr)arg;
},
th);
if (tid == kInvalidTid) {
return REAL(pthread_join)(th, ret);
}
}
#endif
Tid tid = ThreadConsumeTid(thr, pc, (uptr)th);
ThreadIgnoreBegin(thr, pc);
int res = BLOCK_REAL(pthread_join)(th, ret);
@@ -1664,6 +1668,14 @@ TSAN_INTERCEPTOR(int, pthread_barrier_wait, void *b) {
TSAN_INTERCEPTOR(int, pthread_once, void *o, void (*f)()) {
SCOPED_INTERCEPTOR_RAW(pthread_once, o, f);
#if SANITIZER_APPLE && !SANITIZER_GO
if (flags()->lock_during_write != kLockDuringAllWrites &&
cur_thread_init()->in_internal_write_call) {
// This is needed to make it through process launch without hanging
f();
return 0;
}
#endif
if (o == 0 || f == 0)
return errno_EINVAL;
atomic_uint32_t *a;
@@ -2141,13 +2153,29 @@ static void ReportErrnoSpoiling(ThreadState *thr, uptr pc, int sig) {
// StackTrace::GetNextInstructionPc(pc) is used because return address is
// expected, OutputReport() will undo this.
ObtainCurrentStack(thr, StackTrace::GetNextInstructionPc(pc), &stack);
ThreadRegistryLock l(&ctx->thread_registry);
ScopedReport rep(ReportTypeErrnoInSignal);
rep.SetSigNum(sig);
if (!IsFiredSuppression(ctx, ReportTypeErrnoInSignal, stack)) {
rep.AddStack(stack, true);
OutputReport(thr, rep);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
bool suppressed;
// Take a new scope as Apple platforms require the locks below to be
// released before symbolizing in order to avoid a deadlock
{
ThreadRegistryLock l(&ctx->thread_registry);
new (rep) ScopedReport(ReportTypeErrnoInSignal);
rep->SetSigNum(sig);
suppressed = IsFiredSuppression(ctx, ReportTypeErrnoInSignal, stack);
if (!suppressed)
rep->AddStack(stack, true);
#if SANITIZER_APPLE
} // Close this scope to release the locks before writing report
#endif
if (!suppressed)
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
#endif
}
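The same placement-new pattern, distilled into a standalone sketch (illustrative only; Example stands in for ScopedReport and a plain stack buffer stands in for the __builtin_alloca storage):

#include <new>
struct Example {
  ~Example() {}
};
int main() {
  alignas(Example) unsigned char storage[sizeof(Example)];
  Example *e = new (storage) Example();  // construct without touching malloc
  // ... an inner lock scope can end here; *e stays valid on the stack ...
  e->~Example();  // manual destruction; no operator delete / free()
  return 0;
}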
static void CallUserSignalHandler(ThreadState *thr, bool sync, bool acquire,
@@ -2411,7 +2439,11 @@ TSAN_INTERCEPTOR(int, vfork, int fake) {
}
#endif
#if SANITIZER_LINUX
#if SANITIZER_LINUX && !SANITIZER_ANDROID
// Bionic's pthread_create internally calls clone. When the CLONE_THREAD flag is
// set, clone does not create a new process but a new thread. This is a
// workaround for Android. Disabling the interception of clone solves the
// problem in most scenarios.
TSAN_INTERCEPTOR(int, clone, int (*fn)(void *), void *stack, int flags,
void *arg, int *parent_tid, void *tls, pid_t *child_tid) {
SCOPED_INTERCEPTOR_RAW(clone, fn, stack, flags, arg, parent_tid, tls,
@@ -2888,12 +2920,12 @@ TSAN_INTERCEPTOR(void, _lwp_exit) {
#endif
#if SANITIZER_FREEBSD
TSAN_INTERCEPTOR(void, thr_exit, tid_t *state) {
TSAN_INTERCEPTOR(void, thr_exit, ThreadID *state) {
SCOPED_TSAN_INTERCEPTOR(thr_exit, state);
DestroyThreadState();
REAL(thr_exit(state));
}
#define TSAN_MAYBE_INTERCEPT_THR_EXIT TSAN_INTERCEPT(thr_exit)
# define TSAN_MAYBE_INTERCEPT_THR_EXIT TSAN_INTERCEPT(thr_exit)
#else
#define TSAN_MAYBE_INTERCEPT_THR_EXIT
#endif
@@ -3024,12 +3056,26 @@ void InitializeInterceptors() {
TSAN_INTERCEPT(pthread_timedjoin_np);
#endif
TSAN_INTERCEPT_VER(pthread_cond_init, PTHREAD_ABI_BASE);
TSAN_INTERCEPT_VER(pthread_cond_signal, PTHREAD_ABI_BASE);
TSAN_INTERCEPT_VER(pthread_cond_broadcast, PTHREAD_ABI_BASE);
TSAN_INTERCEPT_VER(pthread_cond_wait, PTHREAD_ABI_BASE);
TSAN_INTERCEPT_VER(pthread_cond_timedwait, PTHREAD_ABI_BASE);
TSAN_INTERCEPT_VER(pthread_cond_destroy, PTHREAD_ABI_BASE);
// In glibc versions older than 2.36, dlsym(RTLD_NEXT, "pthread_cond_init")
// may return an outdated symbol (max(2.2,base_version)) if the port was
// introduced before 2.3.2 (when the new pthread_cond_t was introduced).
#if SANITIZER_GLIBC && !__GLIBC_PREREQ(2, 36) && \
(defined(__x86_64__) || defined(__mips__) || SANITIZER_PPC64V1 || \
defined(__s390x__))
INTERCEPT_FUNCTION_VER(pthread_cond_init, "GLIBC_2.3.2");
INTERCEPT_FUNCTION_VER(pthread_cond_signal, "GLIBC_2.3.2");
INTERCEPT_FUNCTION_VER(pthread_cond_broadcast, "GLIBC_2.3.2");
INTERCEPT_FUNCTION_VER(pthread_cond_wait, "GLIBC_2.3.2");
INTERCEPT_FUNCTION_VER(pthread_cond_timedwait, "GLIBC_2.3.2");
INTERCEPT_FUNCTION_VER(pthread_cond_destroy, "GLIBC_2.3.2");
#else
INTERCEPT_FUNCTION(pthread_cond_init);
INTERCEPT_FUNCTION(pthread_cond_signal);
INTERCEPT_FUNCTION(pthread_cond_broadcast);
INTERCEPT_FUNCTION(pthread_cond_wait);
INTERCEPT_FUNCTION(pthread_cond_timedwait);
INTERCEPT_FUNCTION(pthread_cond_destroy);
#endif
TSAN_MAYBE_PTHREAD_COND_CLOCKWAIT;
@@ -3120,7 +3166,7 @@ void InitializeInterceptors() {
TSAN_INTERCEPT(fork);
TSAN_INTERCEPT(vfork);
#if SANITIZER_LINUX
#if SANITIZER_LINUX && !SANITIZER_ANDROID
TSAN_INTERCEPT(clone);
#endif
#if !SANITIZER_ANDROID
+3 -3
@@ -16,7 +16,7 @@
#define TSAN_INTERFACE_H
#include <sanitizer_common/sanitizer_internal_defs.h>
using __sanitizer::tid_t;
using __sanitizer::ThreadID;
using __sanitizer::uptr;
// This header should NOT include any other headers.
@@ -175,7 +175,7 @@ int __tsan_get_report_mutex(void *report, uptr idx, uptr *mutex_id, void **addr,
// Returns information about threads included in the report.
SANITIZER_INTERFACE_ATTRIBUTE
int __tsan_get_report_thread(void *report, uptr idx, int *tid, tid_t *os_id,
int __tsan_get_report_thread(void *report, uptr idx, int *tid, ThreadID *os_id,
int *running, const char **name, int *parent_tid,
void **trace, uptr trace_size);
@@ -192,7 +192,7 @@ const char *__tsan_locate_address(uptr addr, char *name, uptr name_size,
// Returns the allocation stack for a heap pointer.
SANITIZER_INTERFACE_ATTRIBUTE
int __tsan_get_alloc_stack(uptr addr, uptr *trace, uptr size, int *thread_id,
tid_t *os_id);
ThreadID *os_id);
#endif // SANITIZER_GO
+23 -9
@@ -437,16 +437,30 @@ void __tsan_mutex_post_divert(void *addr, unsigned flagz) {
}
static void ReportMutexHeldWrongContext(ThreadState *thr, uptr pc) {
ThreadRegistryLock l(&ctx->thread_registry);
ScopedReport rep(ReportTypeMutexHeldWrongContext);
for (uptr i = 0; i < thr->mset.Size(); ++i) {
MutexSet::Desc desc = thr->mset.Get(i);
rep.AddMutex(desc.addr, desc.stack_id);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
ThreadRegistryLock l(&ctx->thread_registry);
new (rep) ScopedReport(ReportTypeMutexHeldWrongContext);
for (uptr i = 0; i < thr->mset.Size(); ++i) {
MutexSet::Desc desc = thr->mset.Get(i);
rep->AddMutex(desc.addr, desc.stack_id);
}
VarSizeStackTrace trace;
ObtainCurrentStack(thr, pc, &trace);
rep->AddStack(trace, true);
#if SANITIZER_APPLE
} // Close this scope to release the locks
#endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
VarSizeStackTrace trace;
ObtainCurrentStack(thr, pc, &trace);
rep.AddStack(trace, true);
OutputReport(thr, rep);
#endif
}
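// Illustrative sketch, not part of this change: the alloca + placement-new
// idiom that replaces the stack-scoped ScopedReport above. Widget is a
// hypothetical stand-in type; the point is that the storage outlives the inner
// scope (where TSan holds its registry locks), avoids malloc (which can
// deadlock in signal handlers), and therefore needs a manual destructor call.
#include <new>      // placement new
#include <cstdio>
struct Widget {
  explicit Widget(int id) : id_(id) { std::printf("construct %d\n", id_); }
  ~Widget() { std::printf("destroy %d\n", id_); }
  int id_;
};
void Demo() {
  // __builtin_alloca is a GCC/Clang builtin: storage comes from this stack
  // frame, so no allocator is involved.
  Widget *w = static_cast<Widget *>(__builtin_alloca(sizeof(Widget)));
  {
    // In the real code the locks are taken here, then the report object is
    // constructed in the pre-reserved storage.
    new (w) Widget(42);
    // ... populate the report while the locks are held ...
  }  // On Apple platforms this brace releases the locks before output.
  std::printf("output %d\n", w->id_);  // stand-in for OutputReport(thr, *rep)
  w->~Widget();  // placement new means no automatic destruction
}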
INTERFACE_ATTRIBUTE
+18 -4
@@ -182,10 +182,24 @@ static void SignalUnsafeCall(ThreadState *thr, uptr pc) {
ObtainCurrentStack(thr, pc, &stack);
if (IsFiredSuppression(ctx, ReportTypeSignalUnsafe, stack))
return;
ThreadRegistryLock l(&ctx->thread_registry);
ScopedReport rep(ReportTypeSignalUnsafe);
rep.AddStack(stack, true);
OutputReport(thr, rep);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
ThreadRegistryLock l(&ctx->thread_registry);
new (rep) ScopedReport(ReportTypeSignalUnsafe);
rep->AddStack(stack, true);
#if SANITIZER_APPLE
} // Close this scope to release the locks
#endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
#endif
}
+44 -7
@@ -681,6 +681,32 @@ struct MappingGoMips64_47 {
static const uptr kShadowAdd = 0x200000000000ull;
};
/* Go on linux/riscv64 (39-bit VMA)
0000 0001 0000 - 000f 0000 0000: executable and heap (60 GiB)
000f 0000 0000 - 0010 0000 0000: -
0010 0000 0000 - 0030 0000 0000: shadow - 128 GiB ( ~ 2 * app)
0030 0000 0000 - 0038 0000 0000: metainfo - 32 GiB ( ~ 0.5 * app)
0038 0000 0000 - 0040 0000 0000: -
*/
struct MappingGoRiscv64_39 {
static const uptr kMetaShadowBeg = 0x003000000000ull;
static const uptr kMetaShadowEnd = 0x003800000000ull;
static const uptr kShadowBeg = 0x001000000000ull;
static const uptr kShadowEnd = 0x003000000000ull;
static const uptr kLoAppMemBeg = 0x000000010000ull;
static const uptr kLoAppMemEnd = 0x000f00000000ull;
static const uptr kMidAppMemBeg = 0;
static const uptr kMidAppMemEnd = 0;
static const uptr kHiAppMemBeg = 0;
static const uptr kHiAppMemEnd = 0;
static const uptr kHeapMemBeg = 0;
static const uptr kHeapMemEnd = 0;
static const uptr kVdsoBeg = 0;
static const uptr kShadowMsk = 0;
static const uptr kShadowXor = 0;
static const uptr kShadowAdd = 0x001000000000ull;
};
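// Illustrative sketch, not part of this change: recomputing the region sizes
// from the MappingGoRiscv64_39 constants above to check the ratios quoted in
// the layout comment (shadow ~ 2 * app, metainfo ~ 0.5 * app).
#include <cassert>
#include <cstdint>
#include <cstdio>
int main() {
  const uint64_t kGiB = 1ull << 30;
  const uint64_t app    = 0x000f00000000ull - 0x000000010000ull;  // just under 60 GiB
  const uint64_t shadow = 0x003000000000ull - 0x001000000000ull;  // 128 GiB
  const uint64_t meta   = 0x003800000000ull - 0x003000000000ull;  // 32 GiB
  std::printf("app ~%llu GiB, shadow %llu GiB, meta %llu GiB\n",
              (unsigned long long)(app / kGiB),
              (unsigned long long)(shadow / kGiB),
              (unsigned long long)(meta / kGiB));
  assert(shadow == 128 * kGiB);  // roughly 2x the ~60 GiB app range
  assert(meta == 32 * kGiB);     // roughly 0.5x the app range
  return 0;
}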
/* Go on linux/riscv64 (48-bit VMA)
0000 0001 0000 - 00e0 0000 0000: executable and heap (896 GiB)
00e0 0000 0000 - 2000 0000 0000: -
@@ -689,13 +715,13 @@ struct MappingGoMips64_47 {
3000 0000 0000 - 3100 0000 0000: metainfo - 1 TiB ( ~ 1 * app)
3100 0000 0000 - 8000 0000 0000: -
*/
struct MappingGoRiscv64 {
struct MappingGoRiscv64_48 {
static const uptr kMetaShadowBeg = 0x300000000000ull;
static const uptr kMetaShadowEnd = 0x310000000000ull;
static const uptr kShadowBeg = 0x200000000000ull;
static const uptr kShadowEnd = 0x240000000000ull;
static const uptr kLoAppMemBeg = 0x000000010000ull;
static const uptr kLoAppMemEnd = 0x000e00000000ull;
static const uptr kLoAppMemEnd = 0x00e000000000ull;
static const uptr kMidAppMemBeg = 0;
static const uptr kMidAppMemEnd = 0;
static const uptr kHiAppMemBeg = 0;
@@ -756,7 +782,12 @@ ALWAYS_INLINE auto SelectMapping(Arg arg) {
# elif defined(__loongarch_lp64)
return Func::template Apply<MappingGoLoongArch64_47>(arg);
# elif SANITIZER_RISCV64
return Func::template Apply<MappingGoRiscv64>(arg);
switch (vmaSize) {
case 39:
return Func::template Apply<MappingGoRiscv64_39>(arg);
case 48:
return Func::template Apply<MappingGoRiscv64_48>(arg);
}
# elif SANITIZER_WINDOWS
return Func::template Apply<MappingGoWindows>(arg);
# else
@@ -827,7 +858,8 @@ void ForEachMapping() {
Func::template Apply<MappingGoAarch64>();
Func::template Apply<MappingGoLoongArch64_47>();
Func::template Apply<MappingGoMips64_47>();
Func::template Apply<MappingGoRiscv64>();
Func::template Apply<MappingGoRiscv64_39>();
Func::template Apply<MappingGoRiscv64_48>();
Func::template Apply<MappingGoS390x>();
}
@@ -926,7 +958,9 @@ struct IsAppMemImpl {
};
ALWAYS_INLINE
bool IsAppMem(uptr mem) { return SelectMapping<IsAppMemImpl>(mem); }
bool IsAppMem(uptr mem) {
return SelectMapping<IsAppMemImpl>(STRIP_MTE_TAG(mem));
}
struct IsShadowMemImpl {
template <typename Mapping>
@@ -965,7 +999,8 @@ struct MemToShadowImpl {
ALWAYS_INLINE
RawShadow *MemToShadow(uptr x) {
return reinterpret_cast<RawShadow *>(SelectMapping<MemToShadowImpl>(x));
return reinterpret_cast<RawShadow*>(
SelectMapping<MemToShadowImpl>(STRIP_MTE_TAG(x)));
}
struct MemToMetaImpl {
@@ -979,7 +1014,9 @@ struct MemToMetaImpl {
};
ALWAYS_INLINE
u32 *MemToMeta(uptr x) { return SelectMapping<MemToMetaImpl>(x); }
u32* MemToMeta(uptr x) {
return SelectMapping<MemToMetaImpl>(STRIP_MTE_TAG(x));
}
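// Illustrative sketch, not part of this change: what stripping the MTE tag
// amounts to. The real STRIP_MTE_TAG macro is defined elsewhere in the
// runtime and may differ in detail; the assumption here is that on AArch64
// the tag occupies the top byte of the pointer (TBI), which must be cleared
// before the address is compared against the app/shadow/meta ranges.
#include <cstdint>
inline uintptr_t StripMteTagSketch(uintptr_t p) {
#if defined(__aarch64__)
  // MTE stores a 4-bit allocation tag in bits [59:56]; the whole top byte is
  // ignored for translation, so drop it for range checks and shadow mapping.
  return p & ~(0xffull << 56);
#else
  return p;  // no-op on targets without memory tagging
#endif
}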
struct ShadowToMemImpl {
template <typename Mapping>
+42 -14
@@ -393,9 +393,9 @@ void InitializePlatformEarly() {
Die();
}
# else
if (vmaSize != 48) {
if (vmaSize != 39 && vmaSize != 48) {
Printf("FATAL: ThreadSanitizer: unsupported VMA range\n");
Printf("FATAL: Found %zd - Supported 48\n", vmaSize);
Printf("FATAL: Found %zd - Supported 39 and 48\n", vmaSize);
Die();
}
# endif
@@ -415,7 +415,7 @@ void InitializePlatform() {
// is not compiled with -pie.
#if !SANITIZER_GO
{
# if SANITIZER_LINUX && (defined(__aarch64__) || defined(__loongarch_lp64))
# if INIT_LONGJMP_XOR_KEY
// Initialize the xor key used in {sig}{set,long}jump.
InitializeLongjmpXorKey();
# endif
@@ -486,8 +486,20 @@ int ExtractRecvmsgFDs(void *msgp, int *fds, int nfd) {
// Reverse operation of libc stack pointer mangling
static uptr UnmangleLongJmpSp(uptr mangled_sp) {
#if defined(__x86_64__)
# if SANITIZER_LINUX
# if SANITIZER_ANDROID && INIT_LONGJMP_XOR_KEY
if (longjmp_xor_key == 0) {
// bionic libc initialization process: __libc_init_globals ->
// __libc_init_vdso (calls strcmp) -> __libc_init_setjmp_cookie. strcmp is
// intercepted by TSan, so during TSan initialization the setjmp_cookie
// remains uninitialized. On Android, longjmp_xor_key must be set on first
// use.
InitializeLongjmpXorKey();
CHECK_NE(longjmp_xor_key, 0);
}
# endif
# if defined(__x86_64__)
# if SANITIZER_LINUX
// Reverse of:
// xor %fs:0x30, %rsi
// rol $0x11, %rsi
@@ -542,13 +554,23 @@ static uptr UnmangleLongJmpSp(uptr mangled_sp) {
# else
# define LONG_JMP_SP_ENV_SLOT 2
# endif
#elif SANITIZER_LINUX
# ifdef __aarch64__
# define LONG_JMP_SP_ENV_SLOT 13
# elif defined(__loongarch__)
# define LONG_JMP_SP_ENV_SLOT 1
# elif defined(__mips64)
# define LONG_JMP_SP_ENV_SLOT 1
# elif SANITIZER_ANDROID
# ifdef __aarch64__
# define LONG_JMP_SP_ENV_SLOT 3
# elif SANITIZER_RISCV64
# define LONG_JMP_SP_ENV_SLOT 3
# elif defined(__x86_64__)
# define LONG_JMP_SP_ENV_SLOT 6
# else
# error unsupported
# endif
# elif SANITIZER_LINUX
# ifdef __aarch64__
# define LONG_JMP_SP_ENV_SLOT 13
# elif defined(__loongarch__)
# define LONG_JMP_SP_ENV_SLOT 1
# elif defined(__mips64)
# define LONG_JMP_SP_ENV_SLOT 1
# elif SANITIZER_RISCV64
# define LONG_JMP_SP_ENV_SLOT 13
# elif defined(__s390x__)
@@ -556,7 +578,7 @@ static uptr UnmangleLongJmpSp(uptr mangled_sp) {
# else
# define LONG_JMP_SP_ENV_SLOT 6
# endif
#endif
# endif
uptr ExtractLongJmpSp(uptr *env) {
uptr mangled_sp = env[LONG_JMP_SP_ENV_SLOT];
@@ -653,7 +675,13 @@ ThreadState *cur_thread() {
}
CHECK_EQ(0, internal_sigprocmask(SIG_SETMASK, &oldset, nullptr));
}
return thr;
// Skia calls mallopt(M_THREAD_DISABLE_MEM_INIT, 1), which sets the least
// significant bit of TLS_SLOT_SANITIZER to 1. Scudo allocator uses this bit
// as a flag to disable memory initialization. This is a workaround to get the
// correct ThreadState pointer.
uptr addr = reinterpret_cast<uptr>(thr);
return reinterpret_cast<ThreadState*>(addr & ~1ULL);
}
void set_cur_thread(ThreadState *thr) {
+16 -3
@@ -226,9 +226,20 @@ static void ThreadTerminateCallback(uptr thread) {
void InitializePlatformEarly() {
# if !SANITIZER_GO && SANITIZER_IOS
uptr max_vm = GetMaxUserVirtualAddress() + 1;
if (max_vm != HiAppMemEnd()) {
Printf("ThreadSanitizer: unsupported vm address limit %p, expected %p.\n",
(void *)max_vm, (void *)HiAppMemEnd());
if (max_vm < HiAppMemEnd()) {
Report(
"ThreadSanitizer: Unsupported virtual memory layout:\n\tVM address "
"limit = %p\n\tExpected %p.\n",
(void*)max_vm, (void*)HiAppMemEnd());
Die();
}
// In some configurations, max_vm is expanded, but much of this space is
// already mapped. TSan will not work in such a configuration.
if (!MemoryRangeIsAvailable(HiAppMemEnd() - 1, HiAppMemEnd() - 1)) {
Report(
"ThreadSanitizer: Unsupported virtual memory layout: Address %p is "
"already mapped.\n",
(void*)(HiAppMemEnd() - 1));
Die();
}
#endif
@@ -248,7 +259,9 @@ void InitializePlatform() {
ThreadEventCallbacks callbacks = {
.create = ThreadCreateCallback,
.start = nullptr,
.terminate = ThreadTerminateCallback,
.destroy = nullptr,
};
InstallPthreadIntrospectionHook(callbacks);
#endif
+14 -1
@@ -12,6 +12,8 @@
#ifndef TSAN_REPORT_H
#define TSAN_REPORT_H
#include "sanitizer_common/sanitizer_internal_defs.h"
#include "sanitizer_common/sanitizer_stacktrace.h"
#include "sanitizer_common/sanitizer_symbolizer.h"
#include "sanitizer_common/sanitizer_thread_registry.h"
#include "sanitizer_common/sanitizer_vector.h"
@@ -56,6 +58,7 @@ struct ReportMop {
bool atomic;
uptr external_tag;
Vector<ReportMopMutex> mset;
StackTrace stack_trace;
ReportStack *stack;
ReportMop();
@@ -79,25 +82,34 @@ struct ReportLocation {
int fd = 0;
bool fd_closed = false;
bool suppressable = false;
StackID stack_id = 0;
ReportStack *stack = nullptr;
};
struct ReportThread {
Tid id;
tid_t os_id;
ThreadID os_id;
bool running;
ThreadType thread_type;
char *name;
Tid parent_tid;
StackID stack_id;
ReportStack *stack;
bool suppressable;
};
struct ReportMutex {
int id;
uptr addr;
StackID stack_id;
ReportStack *stack;
};
struct AddedLocationAddr {
uptr addr;
usize locs_idx;
};
class ReportDesc {
public:
ReportType typ;
@@ -105,6 +117,7 @@ class ReportDesc {
Vector<ReportStack*> stacks;
Vector<ReportMop*> mops;
Vector<ReportLocation*> locs;
Vector<AddedLocationAddr> added_location_addrs;
Vector<ReportMutex*> mutexes;
Vector<ReportThread*> threads;
Vector<Tid> unique_tids;
+14
@@ -40,6 +40,13 @@ SANITIZER_WEAK_DEFAULT_IMPL
void __tsan_test_only_on_fork() {}
#endif
#if SANITIZER_APPLE && !SANITIZER_GO
// Override weak symbol from sanitizer_common
extern void __tsan_set_in_internal_write_call(bool value) {
__tsan::cur_thread_init()->in_internal_write_call = value;
}
#endif
namespace __tsan {
#if !SANITIZER_GO
@@ -893,6 +900,13 @@ void ForkChildAfter(ThreadState* thr, uptr pc, bool start_thread) {
ThreadIgnoreBegin(thr, pc);
ThreadIgnoreSyncBegin(thr, pc);
}
# if SANITIZER_APPLE && !SANITIZER_GO
// This flag can have inheritance disabled - we are the child so act
// accordingly
if (flags()->lock_during_write == kNoLockDuringWritesCurrentProcess)
flags()->lock_during_write = kLockDuringAllWrites;
# endif
}
#endif
+7 -2
@@ -236,6 +236,10 @@ struct alignas(SANITIZER_CACHE_LINE_SIZE) ThreadState {
const ReportDesc *current_report;
#if SANITIZER_APPLE && !SANITIZER_GO
bool in_internal_write_call;
#endif
explicit ThreadState(Tid tid);
};
@@ -420,6 +424,7 @@ class ScopedReportBase {
void AddSleep(StackID stack_id);
void SetCount(int count);
void SetSigNum(int sig);
void SymbolizeStackElems(void);
const ReportDesc *GetReport() const;
@@ -498,7 +503,7 @@ void ForkChildAfter(ThreadState *thr, uptr pc, bool start_thread);
void ReportRace(ThreadState *thr, RawShadow *shadow_mem, Shadow cur, Shadow old,
AccessType typ);
bool OutputReport(ThreadState *thr, const ScopedReport &srep);
bool OutputReport(ThreadState *thr, ScopedReport &srep);
bool IsFiredSuppression(Context *ctx, ReportType type, StackTrace trace);
bool IsExpectedReport(uptr addr, uptr size);
@@ -559,7 +564,7 @@ void ThreadIgnoreSyncBegin(ThreadState *thr, uptr pc);
void ThreadIgnoreSyncEnd(ThreadState *thr);
Tid ThreadCreate(ThreadState *thr, uptr pc, uptr uid, bool detached);
void ThreadStart(ThreadState *thr, Tid tid, tid_t os_id,
void ThreadStart(ThreadState *thr, Tid tid, ThreadID os_id,
ThreadType thread_type);
void ThreadFinish(ThreadState *thr);
Tid ThreadConsumeTid(ThreadState *thr, uptr pc, uptr uid);
+3 -5
@@ -4,10 +4,8 @@
#include "sanitizer_common/sanitizer_asm.h"
#include "builtins/assembly.h"
#if !defined(__APPLE__)
.section .text
#else
.section __TEXT,__text
TEXT_SECTION
#if defined(__APPLE__)
.align 3
#endif
@@ -222,6 +220,6 @@ ASM_SIZE(ASM_SYMBOL_INTERCEPTOR(__sigsetjmp))
NO_EXEC_STACK_DIRECTIVE
GNU_PROPERTY_BTI_PAC
GNU_PROPERTY_BTI_PAC_GCS
#endif
+8 -3
@@ -419,6 +419,11 @@ NOINLINE void TraceRestartMemoryAccess(ThreadState* thr, uptr pc, uptr addr,
ALWAYS_INLINE USED void MemoryAccess(ThreadState* thr, uptr pc, uptr addr,
uptr size, AccessType typ) {
#if SANITIZER_APPLE && !SANITIZER_GO
// The Swift symbolizer can be intercepted and deadlock without this check.
if (thr->in_symbolizer)
return;
#endif
RawShadow* shadow_mem = MemToShadow(addr);
UNUSED char memBuf[4][64];
DPrintf2("#%d: Access: %d@%d %p/%zd typ=0x%x {%s, %s, %s, %s}\n", thr->tid,
@@ -684,7 +689,7 @@ void MemoryAccessRangeT(ThreadState* thr, uptr pc, uptr addr, uptr size) {
DCHECK(IsAppMem(addr + size - 1));
}
if (!IsShadowMem(shadow_mem)) {
Printf("Bad shadow start addr: %p (%p)\n", shadow_mem, (void*)addr);
Printf("Bad shadow start addr: %p (%p)\n", (void*)shadow_mem, (void*)addr);
DCHECK(IsShadowMem(shadow_mem));
}
@@ -693,12 +698,12 @@ void MemoryAccessRangeT(ThreadState* thr, uptr pc, uptr addr, uptr size) {
RawShadow* shadow_mem_end =
shadow_mem + rounded_size / kShadowCell * kShadowCnt;
if (!IsShadowMem(shadow_mem_end - 1)) {
Printf("Bad shadow end addr: %p (%p)\n", shadow_mem_end - 1,
Printf("Bad shadow end addr: %p (%p)\n", (void*)(shadow_mem_end - 1),
(void*)(addr + size - 1));
Printf(
"Shadow start addr (ok): %p (%p); size: 0x%zx; rounded_size: 0x%zx; "
"kShadowMultiplier: %zx\n",
shadow_mem, (void*)addr, size, rounded_size, kShadowMultiplier);
(void*)shadow_mem, (void*)addr, size, rounded_size, kShadowMultiplier);
DCHECK(IsShadowMem(shadow_mem_end - 1));
}
#endif
+2
@@ -3,6 +3,8 @@
#include "sanitizer_common/sanitizer_asm.h"
.att_syntax
#if !defined(__APPLE__)
.section .text
#else
+94 -49
@@ -11,14 +11,15 @@
//===----------------------------------------------------------------------===//
#include <sanitizer_common/sanitizer_deadlock_detector_interface.h>
#include <sanitizer_common/sanitizer_placement_new.h>
#include <sanitizer_common/sanitizer_stackdepot.h>
#include "tsan_rtl.h"
#include "tsan_flags.h"
#include "tsan_sync.h"
#include "tsan_report.h"
#include "tsan_symbolize.h"
#include "tsan_platform.h"
#include "tsan_report.h"
#include "tsan_rtl.h"
#include "tsan_symbolize.h"
#include "tsan_sync.h"
namespace __tsan {
@@ -55,14 +56,28 @@ static void ReportMutexMisuse(ThreadState *thr, uptr pc, ReportType typ,
return;
if (!ShouldReport(thr, typ))
return;
ThreadRegistryLock l(&ctx->thread_registry);
ScopedReport rep(typ);
rep.AddMutex(addr, creation_stack_id);
VarSizeStackTrace trace;
ObtainCurrentStack(thr, pc, &trace);
rep.AddStack(trace, true);
rep.AddLocation(addr, 1);
OutputReport(thr, rep);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
ThreadRegistryLock l(&ctx->thread_registry);
new (rep) ScopedReport(typ);
rep->AddMutex(addr, creation_stack_id);
VarSizeStackTrace trace;
ObtainCurrentStack(thr, pc, &trace);
rep->AddStack(trace, true);
rep->AddLocation(addr, 1);
#if SANITIZER_APPLE
} // Close this scope to release the locks
#endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
#endif
}
static void RecordMutexLock(ThreadState *thr, uptr pc, uptr addr,
@@ -528,51 +543,81 @@ void AfterSleep(ThreadState *thr, uptr pc) {
void ReportDeadlock(ThreadState *thr, uptr pc, DDReport *r) {
if (r == 0 || !ShouldReport(thr, ReportTypeDeadlock))
return;
ThreadRegistryLock l(&ctx->thread_registry);
ScopedReport rep(ReportTypeDeadlock);
for (int i = 0; i < r->n; i++) {
rep.AddMutex(r->loop[i].mtx_ctx0, r->loop[i].stk[0]);
rep.AddUniqueTid((int)r->loop[i].thr_ctx);
rep.AddThread((int)r->loop[i].thr_ctx);
}
uptr dummy_pc = 0x42;
for (int i = 0; i < r->n; i++) {
for (int j = 0; j < (flags()->second_deadlock_stack ? 2 : 1); j++) {
u32 stk = r->loop[i].stk[j];
if (stk && stk != kInvalidStackID) {
rep.AddStack(StackDepotGet(stk), true);
} else {
// Sometimes we fail to extract the stack trace (FIXME: investigate),
// but we should still produce some stack trace in the report.
rep.AddStack(StackTrace(&dummy_pc, 1), true);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
ThreadRegistryLock l(&ctx->thread_registry);
new (rep) ScopedReport(ReportTypeDeadlock);
for (int i = 0; i < r->n; i++) {
rep->AddMutex(r->loop[i].mtx_ctx0, r->loop[i].stk[0]);
rep->AddUniqueTid((int)r->loop[i].thr_ctx);
rep->AddThread((int)r->loop[i].thr_ctx);
}
uptr dummy_pc = 0x42;
for (int i = 0; i < r->n; i++) {
for (int j = 0; j < (flags()->second_deadlock_stack ? 2 : 1); j++) {
u32 stk = r->loop[i].stk[j];
StackTrace stack;
if (stk && stk != kInvalidStackID) {
stack = StackDepotGet(stk);
} else {
// Sometimes we fail to extract the stack trace (FIXME: investigate),
// but we should still produce some stack trace in the report.
stack = StackTrace(&dummy_pc, 1);
}
rep->AddStack(stack, true);
}
}
#if SANITIZER_APPLE
} // Close this scope to release the locks
#endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
OutputReport(thr, rep);
#endif
}
void ReportDestroyLocked(ThreadState *thr, uptr pc, uptr addr,
FastState last_lock, StackID creation_stack_id) {
// We need to lock the slot during RestoreStack because it protects
// the slot journal.
Lock slot_lock(&ctx->slots[static_cast<uptr>(last_lock.sid())].mtx);
ThreadRegistryLock l0(&ctx->thread_registry);
Lock slots_lock(&ctx->slot_mtx);
ScopedReport rep(ReportTypeMutexDestroyLocked);
rep.AddMutex(addr, creation_stack_id);
VarSizeStackTrace trace;
ObtainCurrentStack(thr, pc, &trace);
rep.AddStack(trace, true);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
// We need to lock the slot during RestoreStack because it protects
// the slot journal.
Lock slot_lock(&ctx->slots[static_cast<uptr>(last_lock.sid())].mtx);
ThreadRegistryLock l0(&ctx->thread_registry);
Lock slots_lock(&ctx->slot_mtx);
new (rep) ScopedReport(ReportTypeMutexDestroyLocked);
rep->AddMutex(addr, creation_stack_id);
VarSizeStackTrace trace;
ObtainCurrentStack(thr, pc, &trace);
rep->AddStack(trace, true);
Tid tid;
DynamicMutexSet mset;
uptr tag;
if (!RestoreStack(EventType::kLock, last_lock.sid(), last_lock.epoch(), addr,
0, kAccessWrite, &tid, &trace, mset, &tag))
return;
rep.AddStack(trace, true);
rep.AddLocation(addr, 1);
OutputReport(thr, rep);
Tid tid;
DynamicMutexSet mset;
uptr tag;
if (!RestoreStack(EventType::kLock, last_lock.sid(), last_lock.epoch(),
addr, 0, kAccessWrite, &tid, &trace, mset, &tag))
return;
rep->AddStack(trace, true);
rep->AddLocation(addr, 1);
#if SANITIZER_APPLE
} // Close this scope to release the locks
#endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
#endif
}
} // namespace __tsan
+135 -68
@@ -11,10 +11,12 @@
//===----------------------------------------------------------------------===//
#include "sanitizer_common/sanitizer_common.h"
#include "sanitizer_common/sanitizer_internal_defs.h"
#include "sanitizer_common/sanitizer_libc.h"
#include "sanitizer_common/sanitizer_placement_new.h"
#include "sanitizer_common/sanitizer_stackdepot.h"
#include "sanitizer_common/sanitizer_stacktrace.h"
#include "tsan_defs.h"
#include "tsan_fd.h"
#include "tsan_flags.h"
#include "tsan_mman.h"
@@ -109,7 +111,13 @@ static ReportStack *SymbolizeStack(StackTrace trace) {
// instruction.
if ((pc & kExternalPCBit) == 0)
pc1 = StackTrace::GetPreviousInstructionPc(pc);
SymbolizedStack *ent = SymbolizeCode(pc1);
SymbolizedStack* ent = SymbolizeCode(pc1, si == trace.size - 1);
#if SANITIZER_GO
if (ent == nullptr) {
// Go might have 0 frames for this PC (wrapper frames aren't reported).
continue;
}
#endif
CHECK_NE(ent, 0);
SymbolizedStack *last = ent;
while (last->next) {
@@ -187,10 +195,8 @@ void ScopedReportBase::AddMemoryAccess(uptr addr, uptr external_tag, Shadow s,
mop->size = size;
mop->write = !(typ & kAccessRead);
mop->atomic = typ & kAccessAtomic;
mop->stack = SymbolizeStack(stack);
mop->external_tag = external_tag;
if (mop->stack)
mop->stack->suppressable = true;
mop->stack_trace = stack;
for (uptr i = 0; i < mset->Size(); i++) {
MutexSet::Desc d = mset->Get(i);
int id = this->AddMutex(d.addr, d.stack_id);
@@ -199,6 +205,56 @@ void ScopedReportBase::AddMemoryAccess(uptr addr, uptr external_tag, Shadow s,
}
}
void ScopedReportBase::SymbolizeStackElems() {
// symbolize memory ops
for (usize i = 0, size = rep_->mops.Size(); i < size; i++) {
ReportMop *mop = rep_->mops[i];
mop->stack = SymbolizeStack(mop->stack_trace);
if (mop->stack)
mop->stack->suppressable = true;
}
// symbolize locations
for (usize i = 0, size = rep_->locs.Size(); i < size; i++) {
// added locations have a NULL placeholder - don't dereference them
if (ReportLocation *loc = rep_->locs[i])
loc->stack = SymbolizeStackId(loc->stack_id);
}
// symbolize any added locations
for (usize i = 0, size = rep_->added_location_addrs.Size(); i < size; i++) {
AddedLocationAddr *added_loc = &rep_->added_location_addrs[i];
if (ReportLocation *loc = SymbolizeData(added_loc->addr)) {
loc->suppressable = true;
rep_->locs[added_loc->locs_idx] = loc;
}
}
// Filter out any added location placeholders that could not be symbolized
usize j = 0;
for (usize i = 0, size = rep_->locs.Size(); i < size; i++) {
if (rep_->locs[i] != nullptr) {
rep_->locs[j] = rep_->locs[i];
j++;
}
}
rep_->locs.Resize(j);
// symbolize threads
for (usize i = 0, size = rep_->threads.Size(); i < size; i++) {
ReportThread *rt = rep_->threads[i];
rt->stack = SymbolizeStackId(rt->stack_id);
if (rt->stack)
rt->stack->suppressable = rt->suppressable;
}
// symbolize mutexes
for (usize i = 0, size = rep_->mutexes.Size(); i < size; i++) {
ReportMutex *rm = rep_->mutexes[i];
rm->stack = SymbolizeStackId(rm->stack_id);
}
}
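// Illustrative sketch, not part of this change: the placeholder-and-compact
// idiom used above for locations added under the registry locks. std::vector
// stands in for the runtime's Vector; the point is that a nullptr placeholder
// keeps the recorded index valid until symbolization runs with the locks
// released, and unresolved placeholders are squeezed out afterwards.
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>
struct Loc { std::string name; };
struct Pending { uintptr_t addr; size_t idx; };
int main() {
  std::vector<Loc *> locs;
  std::vector<Pending> pending;
  locs.push_back(new Loc{"stack-id location"});  // resolvable immediately
  pending.push_back({0x1234, locs.size()});      // remember the slot to fill
  locs.push_back(nullptr);                       // placeholder for a data symbol
  // Later, with the locks released, try to resolve each placeholder; failures
  // simply leave the nullptr in place.
  for (const Pending &p : pending)
    if (p.addr == 0x1234) locs[p.idx] = new Loc{"resolved data symbol"};
  // Compact out placeholders that could not be resolved, preserving order.
  size_t j = 0;
  for (size_t i = 0; i < locs.size(); ++i)
    if (locs[i]) locs[j++] = locs[i];
  locs.resize(j);
  for (Loc *l : locs) delete l;  // cleanup (the sketch owns its allocations)
  return 0;
}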
void ScopedReportBase::AddUniqueTid(Tid unique_tid) {
rep_->unique_tids.PushBack(unique_tid);
}
@@ -216,10 +272,8 @@ void ScopedReportBase::AddThread(const ThreadContext *tctx, bool suppressable) {
rt->name = internal_strdup(tctx->name);
rt->parent_tid = tctx->parent_tid;
rt->thread_type = tctx->thread_type;
rt->stack = 0;
rt->stack = SymbolizeStackId(tctx->creation_stack_id);
if (rt->stack)
rt->stack->suppressable = suppressable;
rt->stack_id = tctx->creation_stack_id;
rt->suppressable = suppressable;
}
#if !SANITIZER_GO
@@ -270,7 +324,7 @@ int ScopedReportBase::AddMutex(uptr addr, StackID creation_stack_id) {
rep_->mutexes.PushBack(rm);
rm->id = rep_->mutexes.Size() - 1;
rm->addr = addr;
rm->stack = SymbolizeStackId(creation_stack_id);
rm->stack_id = creation_stack_id;
return rm->id;
}
@@ -288,7 +342,7 @@ void ScopedReportBase::AddLocation(uptr addr, uptr size) {
loc->fd_closed = closed;
loc->fd = fd;
loc->tid = creat_tid;
loc->stack = SymbolizeStackId(creat_stack);
loc->stack_id = creat_stack;
rep_->locs.PushBack(loc);
AddThread(creat_tid);
return;
@@ -310,7 +364,7 @@ void ScopedReportBase::AddLocation(uptr addr, uptr size) {
loc->heap_chunk_size = b->siz;
loc->external_tag = b->tag;
loc->tid = b->tid;
loc->stack = SymbolizeStackId(b->stk);
loc->stack_id = b->stk;
rep_->locs.PushBack(loc);
AddThread(b->tid);
return;
@@ -324,11 +378,8 @@ void ScopedReportBase::AddLocation(uptr addr, uptr size) {
AddThread(tctx);
}
#endif
if (ReportLocation *loc = SymbolizeData(addr)) {
loc->suppressable = true;
rep_->locs.PushBack(loc);
return;
}
rep_->added_location_addrs.PushBack({addr, rep_->locs.Size()});
rep_->locs.PushBack(nullptr);
}
#if !SANITIZER_GO
@@ -628,11 +679,12 @@ static bool HandleRacyStacks(ThreadState *thr, VarSizeStackTrace traces[2]) {
return false;
}
bool OutputReport(ThreadState *thr, const ScopedReport &srep) {
bool OutputReport(ThreadState *thr, ScopedReport &srep) {
// These should have been checked in ShouldReport.
// It's too late to check them here, we have already taken locks.
CHECK(flags()->report_bugs);
CHECK(!thr->suppress_reports);
srep.SymbolizeStackElems();
atomic_store_relaxed(&ctx->last_symbolize_time_ns, NanoTime());
const ReportDesc *rep = srep.GetReport();
CHECK_EQ(thr->current_report, nullptr);
@@ -761,65 +813,80 @@ void ReportRace(ThreadState *thr, RawShadow *shadow_mem, Shadow cur, Shadow old,
DynamicMutexSet mset1;
MutexSet *mset[kMop] = {&thr->mset, mset1};
// We need to lock the slot during RestoreStack because it protects
// the slot journal.
Lock slot_lock(&ctx->slots[static_cast<uptr>(s[1].sid())].mtx);
ThreadRegistryLock l0(&ctx->thread_registry);
Lock slots_lock(&ctx->slot_mtx);
if (SpuriousRace(old))
return;
if (!RestoreStack(EventType::kAccessExt, s[1].sid(), s[1].epoch(), addr1,
size1, typ1, &tids[1], &traces[1], mset[1], &tags[1])) {
StoreShadow(&ctx->last_spurious_race, old.raw());
return;
}
if (IsFiredSuppression(ctx, rep_typ, traces[1]))
return;
if (HandleRacyStacks(thr, traces))
return;
// If any of the accesses has a tag, treat this as an "external" race.
uptr tag = kExternalTagNone;
for (uptr i = 0; i < kMop; i++) {
if (tags[i] != kExternalTagNone) {
rep_typ = ReportTypeExternalRace;
tag = tags[i];
break;
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
// We need to lock the slot during RestoreStack because it protects
// the slot journal.
Lock slot_lock(&ctx->slots[static_cast<uptr>(s[1].sid())].mtx);
ThreadRegistryLock l0(&ctx->thread_registry);
Lock slots_lock(&ctx->slot_mtx);
if (SpuriousRace(old))
return;
if (!RestoreStack(EventType::kAccessExt, s[1].sid(), s[1].epoch(), addr1,
size1, typ1, &tids[1], &traces[1], mset[1], &tags[1])) {
StoreShadow(&ctx->last_spurious_race, old.raw());
return;
}
}
ScopedReport rep(rep_typ, tag);
for (uptr i = 0; i < kMop; i++)
rep.AddMemoryAccess(addr, tags[i], s[i], tids[i], traces[i], mset[i]);
if (IsFiredSuppression(ctx, rep_typ, traces[1]))
return;
for (uptr i = 0; i < kMop; i++) {
ThreadContext *tctx = static_cast<ThreadContext *>(
ctx->thread_registry.GetThreadLocked(tids[i]));
rep.AddThread(tctx);
}
if (HandleRacyStacks(thr, traces))
return;
rep.AddLocation(addr_min, addr_max - addr_min);
if (flags()->print_full_thread_history) {
const ReportDesc *rep_desc = rep.GetReport();
for (uptr i = 0; i < rep_desc->threads.Size(); i++) {
Tid parent_tid = rep_desc->threads[i]->parent_tid;
if (parent_tid == kMainTid || parent_tid == kInvalidTid)
continue;
ThreadContext *parent_tctx = static_cast<ThreadContext *>(
ctx->thread_registry.GetThreadLocked(parent_tid));
rep.AddThread(parent_tctx);
// If any of the accesses has a tag, treat this as an "external" race.
uptr tag = kExternalTagNone;
for (uptr i = 0; i < kMop; i++) {
if (tags[i] != kExternalTagNone) {
rep_typ = ReportTypeExternalRace;
tag = tags[i];
break;
}
}
new (rep) ScopedReport(rep_typ, tag);
for (uptr i = 0; i < kMop; i++)
rep->AddMemoryAccess(addr, tags[i], s[i], tids[i], traces[i], mset[i]);
for (uptr i = 0; i < kMop; i++) {
ThreadContext *tctx = static_cast<ThreadContext *>(
ctx->thread_registry.GetThreadLocked(tids[i]));
rep->AddThread(tctx);
}
rep->AddLocation(addr_min, addr_max - addr_min);
if (flags()->print_full_thread_history) {
const ReportDesc *rep_desc = rep->GetReport();
for (uptr i = 0; i < rep_desc->threads.Size(); i++) {
Tid parent_tid = rep_desc->threads[i]->parent_tid;
if (parent_tid == kMainTid || parent_tid == kInvalidTid)
continue;
ThreadContext *parent_tctx = static_cast<ThreadContext *>(
ctx->thread_registry.GetThreadLocked(parent_tid));
rep->AddThread(parent_tctx);
}
}
}
#if !SANITIZER_GO
if (!((typ0 | typ1) & kAccessFree) &&
s[1].epoch() <= thr->last_sleep_clock.Get(s[1].sid()))
rep.AddSleep(thr->last_sleep_stack_id);
if (!((typ0 | typ1) & kAccessFree) &&
s[1].epoch() <= thr->last_sleep_clock.Get(s[1].sid()))
rep->AddSleep(thr->last_sleep_stack_id);
#endif
#if SANITIZER_APPLE
} // Close this scope to release the locks
#endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
#if !SANITIZER_APPLE
}
#endif
OutputReport(thr, rep);
}
void PrintCurrentStack(ThreadState *thr, uptr pc) {
+31 -9
@@ -88,15 +88,33 @@ void ThreadFinalize(ThreadState *thr) {
#if !SANITIZER_GO
if (!ShouldReport(thr, ReportTypeThreadLeak))
return;
ThreadRegistryLock l(&ctx->thread_registry);
Vector<ThreadLeak> leaks;
ctx->thread_registry.RunCallbackForEachThreadLocked(CollectThreadLeaks,
&leaks);
{
ThreadRegistryLock l(&ctx->thread_registry);
ctx->thread_registry.RunCallbackForEachThreadLocked(CollectThreadLeaks,
&leaks);
}
for (uptr i = 0; i < leaks.Size(); i++) {
ScopedReport rep(ReportTypeThreadLeak);
rep.AddThread(leaks[i].tctx, true);
rep.SetCount(leaks[i].count);
OutputReport(thr, rep);
// Use alloca, because malloc during signal handling deadlocks
ScopedReport *rep = (ScopedReport *)__builtin_alloca(sizeof(ScopedReport));
// Take a new scope as Apple platforms require the below locks released
// before symbolizing in order to avoid a deadlock
{
ThreadRegistryLock l(&ctx->thread_registry);
new (rep) ScopedReport(ReportTypeThreadLeak);
rep->AddThread(leaks[i].tctx, true);
rep->SetCount(leaks[i].count);
# if SANITIZER_APPLE
} // Close this scope to release the locks
# endif
OutputReport(thr, *rep);
// Need to manually destroy this because we used placement new to allocate
rep->~ScopedReport();
# if !SANITIZER_APPLE
}
# endif
}
#endif
}
@@ -149,7 +167,7 @@ struct OnStartedArgs {
uptr tls_size;
};
void ThreadStart(ThreadState *thr, Tid tid, tid_t os_id,
void ThreadStart(ThreadState *thr, Tid tid, ThreadID os_id,
ThreadType thread_type) {
ctx->thread_registry.StartThread(tid, os_id, thread_type, thr);
if (!thr->ignore_sync) {
@@ -188,10 +206,14 @@ void ThreadStart(ThreadState *thr, Tid tid, tid_t os_id,
}
#endif
#if !SANITIZER_GO
#if !SANITIZER_GO && !SANITIZER_ANDROID
// Don't imitate stack/TLS writes for the main thread,
// because its initialization is synchronized with all
// subsequent threads anyway.
// On Android, thr is created by MmapOrDie, so the thr object is not in TLS;
// only a pointer to it is stored in the TLS_SLOT_SANITIZER slot. So skip
// this on Android.
if (tid != kMainTid) {
if (stk_addr && stk_size) {
const uptr pc = StackTrace::GetNextInstructionPc(
+1 -1
@@ -79,7 +79,7 @@ static void AddFrame(void *ctx, const char *function_name, const char *file,
info->column = column;
}
SymbolizedStack *SymbolizeCode(uptr addr) {
SymbolizedStack* SymbolizeCode(uptr addr, bool leaf) {
// Check if PC comes from non-native land.
if (addr & kExternalPCBit) {
SymbolizedStackBuilder ssb = {nullptr, nullptr, addr};
+1 -1
@@ -19,7 +19,7 @@ namespace __tsan {
void EnterSymbolizer();
void ExitSymbolizer();
SymbolizedStack *SymbolizeCode(uptr addr);
SymbolizedStack* SymbolizeCode(uptr addr, bool leaf);
ReportLocation *SymbolizeData(uptr addr);
void SymbolizeFlush();
+1 -1
@@ -190,7 +190,7 @@ struct Trace {
Mutex mtx;
IList<TraceHeader, &TraceHeader::trace_parts, TracePart> parts;
// First node non-queued into ctx->trace_part_recycle.
TracePart* local_head;
TracePart* local_head = nullptr;
// Final position in the last part for finished threads.
Event* final_pos = nullptr;
// Number of trace parts allocated on behalf of this trace specifically.