/* -*- mode: C; c-basic-offset: 3; -*- */

/*--------------------------------------------------------------------*/
/*--- Implementation of POSIX signals.                 m_signals.c ---*/
/*--------------------------------------------------------------------*/

/*
   This file is part of Valgrind, a dynamic binary instrumentation
   framework.

   Copyright (C) 2000-2017 Julian Seward
      jseward@acm.org

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
   02111-1307, USA.

   The GNU General Public License is contained in the file COPYING.
*/

/*
   Signal handling.

   There are 4 distinct classes of signal:

   1. Synchronous, instruction-generated (SIGILL, FPE, BUS, SEGV and
   TRAP): these are signals as a result of an instruction fault.  If
   we get one while running client code, then we just do the
   appropriate thing.  If it happens while running Valgrind code, then
   it indicates a Valgrind bug.  Note that we "manually" implement
   automatic stack growth, such that if a fault happens near the
   client process stack, it is extended in the same way the kernel
   would, and the fault is never reported to the client program.

   2. Asynchronous variants of the above signals: If the kernel tries
   to deliver a sync signal while it is blocked, it just kills the
   process.  Therefore, we can't block those signals if we want to be
   able to report on bugs in Valgrind.  This means that we're also
   open to receiving those signals from other processes, sent with
   kill.  We could get away with just dropping them, since they aren't
   really signals that processes send to each other.

   3. Synchronous, general signals.  If a thread/process sends itself
   a signal with kill, it's expected to be synchronous: i.e. the signal
   will have been delivered by the time the syscall finishes.

   4. Asynchronous, general signals.  All other signals, sent by
   another process with kill.  These are generally blocked, except for
   two special cases: we poll for them each time we're about to run a
   thread for a time quantum, and while running blocking syscalls.


   In addition, we reserve one signal for internal use: SIGVGKILL.
   SIGVGKILL is used to terminate threads.  When one thread wants
   another to exit, it will set its exitreason and send it SIGVGKILL
   if it appears to be blocked in a syscall.


   We use a kernel thread for each application thread.  When the
   thread allows itself to be open to signals, it sets the thread
   signal mask to what the client application set it to.  This means
   that we get the kernel to do all signal routing: under Valgrind,
   signals get delivered in the same way as in the non-Valgrind case
   (the exception being for the sync signal set, since they're almost
   always unblocked).
 */

/*
   Some more details...

   First off, we take note of the client's requests (via sys_sigaction
   and sys_sigprocmask) to set the signal state (handlers for each
   signal, which are process-wide, + a mask for each signal, which is
   per-thread).  This info is duly recorded in the SCSS (static Client
   signal state) in m_signals.c, and if the client later queries what
   the state is, we merely fish the relevant info out of SCSS and give
   it back.

   However, we set the real signal state in the kernel to something
   entirely different.  This is recorded in SKSS, the static Kernel
   signal state.  What's nice (to the extent that anything is nice
   w.r.t. signals) is that there's a pure function to calculate SKSS
   from SCSS, calculate_SKSS_from_SCSS.  So when the client changes
   SCSS, we recompute the associated SKSS and apply any changes from
   the previous SKSS through to the kernel.

   Now, that said, the general scheme is that, regardless of what the
   client puts into the SCSS (viz, asks for), what we would like to do
   is as follows:

   (1) run code on the virtual CPU with all signals blocked

   (2) at convenient moments for us (that is, when the VCPU stops, and
      control is back with the scheduler), ask the kernel "do you have
      any signals for me?"  and if it does, collect up the info, and
      deliver them to the client (by building sigframes).

   And that's almost what we do.  The signal polling is done by
   VG_(poll_signals), which calls through to VG_(sigtimedwait_zero) to
   do the dirty work.  (Of which more later.)

   By polling signals, rather than catching them, we get to deal with
   them only at convenient moments, rather than having to recover from
   taking a signal while generated code is running.

   Now unfortunately .. the above scheme only works for so-called async
   signals.  An async signal is one which isn't associated with any
   particular instruction, eg Control-C (SIGINT).  For those, it doesn't
   matter if we don't deliver the signal to the client immediately; it
   only matters that we deliver it eventually.  Hence polling is OK.

   But the other group -- sync signals -- are all related by the fact
   that they are various ways for the host CPU to fail to execute an
   instruction: SIGILL, SIGSEGV, SIGFPE.  And they can't be deferred,
   because obviously if a host instruction can't execute, well then we
   have to immediately do Plan B, whatever that is.

   So the next approximation of what happens is:

   (1) run code on vcpu with all async signals blocked

   (2) at convenient moments (when NOT running the vcpu), poll for async
      signals.

   (1) and (2) together imply that if the host does deliver a signal to
      async_signalhandler while the VCPU is running, something's
      seriously wrong.

   (3) when running code on vcpu, don't block sync signals.  Instead
      register sync_signalhandler and catch any such via that.  Of
      course, that means an ugly recovery path if we do -- the
      sync_signalhandler has to longjmp, exiting out of the generated
      code, and the assembly-dispatcher thingy that runs it, and gets
      caught in m_scheduler, which then tells m_signals to deliver the
      signal.

   Now naturally (ha ha) even that might be tolerable, but there's
   something worse: dealing with signals delivered to threads in
   syscalls.

   Obviously from the above, SKSS's signal mask (viz, what we really run
   with) is way different from SCSS's signal mask (viz, what the client
   thread thought it asked for).  For example, it may well be that the
   client did not block control-C, so that it just expects to drop dead
   if it receives ^C whilst blocked in a syscall, but by default we are
   running with all async signals blocked, and so that signal could be
   arbitrarily delayed, or perhaps even lost (not sure).

   So what we have to do, when doing any syscall which SfMayBlock, is to
   quickly switch in the SCSS-specified signal mask just before the
   syscall, and switch it back just afterwards, and hope that we don't
   get caught up in some weird race condition.  This is the primary
   purpose of the ultra-magical pieces of assembly code in
   coregrind/m_syswrap/syscall-<plat>.S
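
   Schematically, what those assembly pieces do is the following
   (illustrative pseudo-C only -- the real sequence is hand-written
   assembly precisely so that the interruption/restart behaviour can
   be controlled):

      vki_sigset_t saved;
      /* switch to the client's (SCSS) mask; signals may now arrive */
      sigprocmask(VKI_SIG_SETMASK, &tst->sig_mask, &saved);
      result = do_the_syscall(...);                /* may block */
      /* switch back to our (SKSS) everything-blocked mask */
      sigprocmask(VKI_SIG_SETMASK, &saved, NULL);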

   -----------

   The ways in which V can come to hear of signals that need to be
   forwarded to the client are as follows:

    sync signals: can arrive at any time whatsoever.  These are caught
                  by sync_signalhandler

    async signals:

       if    running generated code
       then  these are blocked, so we don't expect to catch them in
             async_signalhandler

       else
       if    thread is blocked in a syscall marked SfMayBlock
       then  signals may be delivered to async_signalhandler, since we
             temporarily unblocked them for the duration of the syscall,
             by using the real (SCSS) mask for this thread

       else  we're doing misc housekeeping activities (eg, making a translation,
             washing our hair, etc).  As in the normal case, these signals are
             blocked, but we can and do poll for them using VG_(poll_signals).

   Now, re VG_(poll_signals), it polls the kernel by doing
   VG_(sigtimedwait_zero).  This is trivial on Linux, since it's just a
   syscall.  But on Darwin and AIX, we have to cobble together the
   functionality in a tedious, longwinded and probably error-prone way.
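
   For the Linux case, the idea is minimal enough to sketch here
   (illustrative only; the real implementation is in m_libcsignal.c
   and cannot use the C library):

      vki_siginfo_t si;
      struct vki_timespec zero = { 0, 0 };
      /* If a signal in 'set' is pending, dequeue it into 'si' and
         return its number; otherwise fail at once with EAGAIN --
         the zero timeout means we never actually wait. */
      Int res = sigtimedwait(&set, &si, &zero);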

   Finally, if a gdb is debugging the process under valgrind,
   gdb may direct that the signal be ignored.  So, before resuming the
   scheduler/delivering the signal, a call to VG_(gdbserver_report_signal)
   is done.  If this returns True, the signal is delivered.
 */

#include "pub_core_basics.h"
#include "pub_core_vki.h"
#include "pub_core_vkiscnums.h"
#include "pub_core_debuglog.h"
#include "pub_core_threadstate.h"
#include "pub_core_xarray.h"
#include "pub_core_clientstate.h"
#include "pub_core_aspacemgr.h"
#include "pub_core_errormgr.h"
#include "pub_core_gdbserver.h"
#include "pub_core_libcbase.h"
#include "pub_core_libcassert.h"
#include "pub_core_libcprint.h"
#include "pub_core_libcproc.h"
#include "pub_core_libcsignal.h"
#include "pub_core_machine.h"
#include "pub_core_mallocfree.h"
#include "pub_core_options.h"
#include "pub_core_scheduler.h"
#include "pub_core_signals.h"
#include "pub_core_sigframe.h"      // For VG_(sigframe_create)()
#include "pub_core_stacks.h"        // For VG_(change_stack)()
#include "pub_core_stacktrace.h"    // For VG_(get_and_pp_StackTrace)()
#include "pub_core_syscall.h"
#include "pub_core_syswrap.h"
#include "pub_core_tooliface.h"
#include "pub_core_coredump.h"


/* ---------------------------------------------------------------------
   Forward decls.
   ------------------------------------------------------------------ */

static void sync_signalhandler  ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void async_signalhandler ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void sigvgkill_handler   ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );

/* Maximum usable signal. */
Int VG_(max_signal) = _VKI_NSIG;

#define N_QUEUED_SIGNALS   8

typedef struct SigQueue {
   Int next;
   vki_siginfo_t sigs[N_QUEUED_SIGNALS];
} SigQueue;

/* ------ Macros for pulling stuff out of ucontexts ------ */

/* Q: what does VG_UCONTEXT_SYSCALL_SYSRES do?  A: let's suppose the
   machine context (uc) reflects the situation that a syscall had just
   completed, quite literally -- that is, that the program counter is
   now at the instruction following the syscall.  (Or we're slightly
   downstream, but we're sure no relevant register has yet changed
   value.)  Then VG_UCONTEXT_SYSCALL_SYSRES returns a SysRes reflecting
   the result of the syscall; it does this by fishing relevant bits of
   the machine state out of the uc.  Of course if the program counter
   was somewhere else entirely then the result is likely to be
   meaningless, so the caller of VG_UCONTEXT_SYSCALL_SYSRES has to be
   very careful to pay attention to the results only when it is sure
   that the said constraint on the program counter is indeed valid. */
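/* For instance (schematic, not code from this file): having
   established that uc's program counter sits just after a syscall
   instruction, a caller could do

      SysRes sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
      if (sr_isError(sres))
         ... the syscall failed; the error code is sr_Err(sres) ...

   At any other program point the extracted value is garbage. */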

#if defined(VGP_x86_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.eip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.esp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.eax into a SysRes. */ \
      VG_(mk_SysRes_x86_linux)( (uc)->uc_mcontext.eax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.eip);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.esp);    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.ebp;   \
      }

#elif defined(VGP_amd64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.rip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.rsp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.rax into a SysRes. */ \
      VG_(mk_SysRes_amd64_linux)( (uc)->uc_mcontext.rax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (uc)->uc_mcontext.rip;             \
        (srP)->r_sp = (uc)->uc_mcontext.rsp;             \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.rbp; \
      }

#elif defined(VGP_ppc32_linux)
/* Comments from Paul Mackerras 25 Nov 05:

   > I'm tracking down a problem where V's signal handling doesn't
   > work properly on a ppc440gx running 2.4.20.  The problem is that
   > the ucontext being presented to V's sighandler seems completely
   > bogus.

   > V's kernel headers and hence ucontext layout are derived from
   > 2.6.9.  I compared include/asm-ppc/ucontext.h from 2.4.20 and
   > 2.6.13.

   > Can I just check my interpretation: the 2.4.20 one contains the
   > uc_mcontext field in line, whereas the 2.6.13 one has a pointer
   > to said struct?  And so if V is using the 2.6.13 struct then a
   > 2.4.20 one will make no sense to it.

   Not quite... what is inline in the 2.4.20 version is a
   sigcontext_struct, not an mcontext.  The sigcontext looks like
   this:

     struct sigcontext_struct {
        unsigned long   _unused[4];
        int             signal;
        unsigned long   handler;
        unsigned long   oldmask;
        struct pt_regs  *regs;
     };

   The regs pointer of that struct ends up at the same offset as the
   uc_regs of the 2.6 struct ucontext, and a struct pt_regs is the
   same as the mc_gregs field of the mcontext.  In fact the integer
   regs are followed in memory by the floating point regs on 2.4.20.

   Thus if you are using the 2.6 definitions, it should work on 2.4.20
   provided that you go via uc->uc_regs rather than looking in
   uc->uc_mcontext directly.

   There is another subtlety: 2.4.20 doesn't save the vector regs when
   delivering a signal, and 2.6.x only saves the vector regs if the
   process has ever used an altivec instruction.  If 2.6.x does save
   the vector regs, it sets the MSR_VEC bit in
   uc->uc_regs->mc_gregs[PT_MSR], otherwise it clears it.  That bit
   will always be clear under 2.4.20.  So you can use that bit to tell
   whether uc->uc_regs->mc_vregs is valid. */
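/* So, schematically (illustrative only; MSR_VEC_BIT is a made-up name
   here for the MSR_VEC mask):

      if (uc->uc_regs->mc_gregs[VKI_PT_MSR] & MSR_VEC_BIT)
         ... uc->uc_regs->mc_vregs is valid and may be read ...
*/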
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_regs->mc_gregs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_regs->mc_gregs[VKI_PT_R1])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the values in uc_mcontext r3,cr into a SysRes. */  \
      VG_(mk_SysRes_ppc32_linux)(                                   \
         (uc)->uc_regs->mc_gregs[VKI_PT_R3],                        \
         (((uc)->uc_regs->mc_gregs[VKI_PT_CCR] >> 28) & 1)          \
      )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                     \
      { (srP)->r_pc = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_NIP]);   \
        (srP)->r_sp = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_R1]);    \
        (srP)->misc.PPC32.r_lr = (uc)->uc_regs->mc_gregs[VKI_PT_LNK]; \
      }

#elif defined(VGP_ppc64be_linux) || defined(VGP_ppc64le_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_R1])
   /* Dubious hack: if there is an error, only consider the lowest 8
      bits of r3.  memcheck/tests/post-syscall shows a case where an
      interrupted syscall should have produced a ucontext with 0x4
      (VKI_EINTR) in r3 but is in fact producing 0x204. */
   /* Awaiting clarification from PaulM.  Evidently 0x204 is
      ERESTART_RESTARTBLOCK, which shouldn't have made it into user
      space. */
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( struct vki_ucontext* uc )
   {
      ULong err = (uc->uc_mcontext.gp_regs[VKI_PT_CCR] >> 28) & 1;
      ULong r3  = uc->uc_mcontext.gp_regs[VKI_PT_R3];
      if (err) r3 &= 0xFF;
      return VG_(mk_SysRes_ppc64_linux)( r3, err );
   }
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                       \
      { (srP)->r_pc = (uc)->uc_mcontext.gp_regs[VKI_PT_NIP];            \
        (srP)->r_sp = (uc)->uc_mcontext.gp_regs[VKI_PT_R1];             \
        (srP)->misc.PPC64.r_lr = (uc)->uc_mcontext.gp_regs[VKI_PT_LNK]; \
      }

#elif defined(VGP_arm_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.arm_pc)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.arm_sp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                           \
      /* Convert the value in uc_mcontext.arm_r0 into a SysRes. */ \
      VG_(mk_SysRes_arm_linux)( (uc)->uc_mcontext.arm_r0 )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)       \
      { (srP)->r_pc = (uc)->uc_mcontext.arm_pc;         \
        (srP)->r_sp = (uc)->uc_mcontext.arm_sp;         \
        (srP)->misc.ARM.r14 = (uc)->uc_mcontext.arm_lr; \
        (srP)->misc.ARM.r12 = (uc)->uc_mcontext.arm_ip; \
        (srP)->misc.ARM.r11 = (uc)->uc_mcontext.arm_fp; \
        (srP)->misc.ARM.r7  = (uc)->uc_mcontext.arm_r7; \
      }

#elif defined(VGP_arm64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((UWord)((uc)->uc_mcontext.pc))
#  define VG_UCONTEXT_STACK_PTR(uc)       ((UWord)((uc)->uc_mcontext.sp))
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the value in uc_mcontext.regs[0] into a SysRes. */ \
      VG_(mk_SysRes_arm64_linux)( (uc)->uc_mcontext.regs[0] )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)           \
      { (srP)->r_pc = (uc)->uc_mcontext.pc;                 \
        (srP)->r_sp = (uc)->uc_mcontext.sp;                 \
        (srP)->misc.ARM64.x29 = (uc)->uc_mcontext.regs[29]; \
        (srP)->misc.ARM64.x30 = (uc)->uc_mcontext.regs[30]; \
      }

#elif defined(VGP_x86_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__eip;
   }
   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__esp;
   }
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* This is complicated by the problem that there are 3 different
         kinds of syscalls, each with its own return convention.
         NB: scclass is a host word, hence UWord is good for both
         amd64-darwin and x86-darwin */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      UInt carry = 1 & ss->__eflags;
      UInt err = 0;
      UInt wLO = 0;
      UInt wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__eax;
            wHI = ss->__edx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__eax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__eax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_x86_darwin)( scclass, err ? True : False,
                                        wHI, wLO );
   }
   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)(ucV);
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__eip);
      srP->r_sp = (ULong)(ss->__esp);
      srP->misc.X86.r_ebp = (UInt)(ss->__ebp);
   }

#elif defined(VGP_amd64_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rip;
   }
   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rsp;
   }
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* This is copied from the x86-darwin case.  I'm not sure if it
         is correct. */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      ULong carry = 1 & ss->__rflags;
      ULong err = 0;
      ULong wLO = 0;
      ULong wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__rax;
            wHI = ss->__rdx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__rax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__rax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_amd64_darwin)( scclass, err ? True : False,
                                          wHI, wLO );
   }
   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__rip);
      srP->r_sp = (ULong)(ss->__rsp);
      srP->misc.AMD64.r_rbp = (ULong)(ss->__rbp);
   }

#elif defined(VGP_s390x_linux)

#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.regs.psw.addr)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[15])
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[11])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      VG_(mk_SysRes_s390x_linux)((uc)->uc_mcontext.regs.gprs[2])
#  define VG_UCONTEXT_LINK_REG(uc) ((uc)->uc_mcontext.regs.gprs[14])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                  \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.regs.psw.addr);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.regs.gprs[15]);    \
        (srP)->misc.S390X.r_fp = (uc)->uc_mcontext.regs.gprs[11];  \
        (srP)->misc.S390X.r_lr = (uc)->uc_mcontext.regs.gprs[14];  \
      }

#elif defined(VGP_mips32_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                             \
      /* Convert the values in uc_mcontext.sc_regs into a SysRes. */ \
      VG_(mk_SysRes_mips32_linux)( (uc)->uc_mcontext.sc_regs[2],     \
                                   (uc)->uc_mcontext.sc_regs[3],     \
                                   (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_mips64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       (((uc)->uc_mcontext.sc_pc))
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.sc_regs[29])
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                             \
      /* Convert the values in uc_mcontext.sc_regs into a SysRes. */ \
      VG_(mk_SysRes_mips64_linux)((uc)->uc_mcontext.sc_regs[2],      \
                                  (uc)->uc_mcontext.sc_regs[3],      \
                                  (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS64.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS64.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS64.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_x86_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((Addr)(uc)->uc_mcontext.gregs[VKI_EIP])
#  define VG_UCONTEXT_STACK_PTR(uc)       ((Addr)(uc)->uc_mcontext.gregs[VKI_UESP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                               \
      VG_(mk_SysRes_x86_solaris)((uc)->uc_mcontext.gregs[VKI_EFL] & 1, \
                                 (uc)->uc_mcontext.gregs[VKI_EAX],     \
                                 (uc)->uc_mcontext.gregs[VKI_EFL] & 1  \
                                 ? 0 : (uc)->uc_mcontext.gregs[VKI_EDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                      \
      { (srP)->r_pc = (ULong)(uc)->uc_mcontext.gregs[VKI_EIP];         \
        (srP)->r_sp = (ULong)(uc)->uc_mcontext.gregs[VKI_UESP];        \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.gregs[VKI_EBP];      \
      }

#elif defined(VGP_amd64_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RIP])
#  define VG_UCONTEXT_STACK_PTR(uc)       ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RSP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                                     \
      VG_(mk_SysRes_amd64_solaris)((uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1, \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RAX],     \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1  \
                                   ? 0 : (uc)->uc_mcontext.gregs[VKI_REG_RDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                            \
      { (srP)->r_pc = (uc)->uc_mcontext.gregs[VKI_REG_RIP];                  \
        (srP)->r_sp = (uc)->uc_mcontext.gregs[VKI_REG_RSP];                  \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.gregs[VKI_REG_RBP];      \
      }
#else
#  error Unknown platform
#endif


/* ------ Macros for pulling stuff out of siginfos ------ */

/* These macros allow use of uniform names when working with
   both the Linux and Darwin vki definitions. */
#if defined(VGO_linux)
#  define VKI_SIGINFO_si_addr  _sifields._sigfault._addr
#  define VKI_SIGINFO_si_pid   _sifields._kill._pid
#elif defined(VGO_darwin) || defined(VGO_solaris)
#  define VKI_SIGINFO_si_addr  si_addr
#  define VKI_SIGINFO_si_pid   si_pid
#else
#  error Unknown OS
#endif


/* ---------------------------------------------------------------------
   HIGH LEVEL STUFF TO DO WITH SIGNALS: POLICY (MOSTLY)
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Signal state for this process.
   ------------------------------------------------------------------ */


/* Base-ment of these arrays[_VKI_NSIG].

   Valid signal numbers are 1 .. _VKI_NSIG inclusive.
   Rather than subtracting 1 for indexing these arrays, which
   is tedious and error-prone, they are simply dimensioned 1 larger,
   and entry [0] is not used.
 */


/* -----------------------------------------------------
   Static client signal state (SCSS).  This is the state
   that the client thinks it has the kernel in.
   SCSS records verbatim the client's settings.  These
   are mashed around only when SKSS is calculated from it.
   -------------------------------------------------- */

typedef
   struct {
      void* scss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN or ptr to
                              client's handler */
      UInt  scss_flags;
      vki_sigset_t scss_mask;
      void* scss_restorer; /* where sigreturn goes */
      void* scss_sa_tramp; /* sa_tramp setting, Darwin only */
      /* re _restorer and _sa_tramp, we merely record the values
         supplied when the client does 'sigaction' and give them back
         when requested.  Otherwise they are simply ignored. */
   }
   SCSS_Per_Signal;

typedef
   struct {
      /* per-signal info */
      SCSS_Per_Signal scss_per_sig[1+_VKI_NSIG];

      /* Additional elements to SCSS not stored here:
         - for each thread, the thread's blocking mask
         - for each thread in WaitSIG, the set of waited-on sigs
      */
   }
   SCSS;

static SCSS scss;


/* -----------------------------------------------------
   Static kernel signal state (SKSS).  This is the state
   that we have the kernel in.  It is computed from SCSS.
   -------------------------------------------------- */

/* Let's do:
     sigprocmask assigns to all thread masks
     so that at least everything is always consistent
   Flags:
     SA_SIGINFO -- we always set it, and honour it for the client
     SA_NOCLDSTOP -- passed to kernel
     SA_ONESHOT or SA_RESETHAND -- pass through
     SA_RESTART -- we observe this but set our handlers to always restart
                   (this doesn't apply to the Solaris port)
     SA_NOMASK or SA_NODEFER -- we observe this, but our handlers block everything
     SA_ONSTACK -- pass through
     SA_NOCLDWAIT -- pass through
*/


typedef
   struct {
      void* skss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN
                              or ptr to our handler */
      UInt skss_flags;
      /* There is no skss_mask, since we know that we will always ask
         for all signals to be blocked in our sighandlers. */
      /* Also there is no skss_restorer. */
   }
   SKSS_Per_Signal;

typedef
   struct {
      SKSS_Per_Signal skss_per_sig[1+_VKI_NSIG];
   }
   SKSS;

static SKSS skss;

/* Returns True if the signal is to be ignored.
   To determine this, we may call gdbserver with tid. */
static Bool is_sig_ign(vki_siginfo_t *info, ThreadId tid)
{
   vg_assert(info->si_signo >= 1 && info->si_signo <= _VKI_NSIG);

   /* If VG_(gdbserver_report_signal) tells us to report the signal,
      then verify that this signal is not to be ignored.  GDB might have
      modified si_signo, so we check after the call to gdbserver. */
   return !VG_(gdbserver_report_signal) (info, tid)
      || scss.scss_per_sig[info->si_signo].scss_handler == VKI_SIG_IGN;
}

/* ---------------------------------------------------------------------
   Compute the SKSS required by the current SCSS.
   ------------------------------------------------------------------ */

static
void pp_SKSS ( void )
{
   Int sig;
   VG_(printf)("\n\nSKSS:\n");
   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      VG_(printf)("sig %d:  handler %p,  flags 0x%x\n", sig,
                  skss.skss_per_sig[sig].skss_handler,
                  skss.skss_per_sig[sig].skss_flags );
   }
}

/* This is the core, clever bit.  Computation is as follows:

   For each signal
      handler = if client has a handler, then our handler
                else if client is DFL, then our handler as well
                else (client must be IGN) then handler is IGN
*/
static
void calculate_SKSS_from_SCSS ( SKSS* dst )
{
   Int   sig;
   UInt  scss_flags;
   UInt  skss_flags;

   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      void *skss_handler;
      void *scss_handler;

      scss_handler = scss.scss_per_sig[sig].scss_handler;
      scss_flags   = scss.scss_per_sig[sig].scss_flags;

      switch(sig) {
      case VKI_SIGSEGV:
      case VKI_SIGBUS:
      case VKI_SIGFPE:
      case VKI_SIGILL:
      case VKI_SIGTRAP:
         /* For these, we always want to catch them and report, even
            if the client code doesn't. */
         skss_handler = sync_signalhandler;
         break;

      case VKI_SIGCONT:
         /* Let the kernel handle SIGCONT unless the client is actually
            catching it. */
      case VKI_SIGCHLD:
      case VKI_SIGWINCH:
      case VKI_SIGURG:
         /* For signals which have a default action of Ignore,
            only set a handler if the client has set a signal handler.
            Otherwise the kernel will interrupt a syscall which
            wouldn't have otherwise been interrupted. */
         if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_DFL)
            skss_handler = VKI_SIG_DFL;
         else if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_IGN)
            skss_handler = VKI_SIG_IGN;
         else
            skss_handler = async_signalhandler;
         break;

      default:
         // VKI_SIGVG* are runtime variables, so we can't make them
         // cases in the switch, so we handle them in the 'default' case.
         if (sig == VG_SIGVGKILL)
            skss_handler = sigvgkill_handler;
         else {
            if (scss_handler == VKI_SIG_IGN)
               skss_handler = VKI_SIG_IGN;
            else
               skss_handler = async_signalhandler;
         }
         break;
      }

      /* Flags */

      skss_flags = 0;

      /* SA_NOCLDSTOP, SA_NOCLDWAIT: pass to kernel */
      skss_flags |= scss_flags & (VKI_SA_NOCLDSTOP | VKI_SA_NOCLDWAIT);

      /* SA_ONESHOT: ignore client setting */

#     if !defined(VGO_solaris)
      /* SA_RESTART: ignore client setting and always set it for us.
         Though we never rely on the kernel to restart a
         syscall, we observe whether it wanted to restart the syscall
         or not, which is needed by
         VG_(fixup_guest_state_after_syscall_interrupted) */
      skss_flags |= VKI_SA_RESTART;
#     else
      /* The above does not apply to the Solaris port, where the kernel
         does not directly restart syscalls; instead it checks the
         SA_RESTART flag, and if that is set it returns ERESTART to libc,
         and the library actually restarts the syscall. */
      skss_flags |= scss_flags & VKI_SA_RESTART;
#     endif

      /* SA_NOMASK: ignore it */

      /* SA_ONSTACK: client setting is irrelevant here */
      /* We don't set a signal stack, so ignore */

      /* always ask for SA_SIGINFO */
      skss_flags |= VKI_SA_SIGINFO;

      /* use our own restorer */
      skss_flags |= VKI_SA_RESTORER;

      /* Create SKSS entry for this signal. */
      if (sig != VKI_SIGKILL && sig != VKI_SIGSTOP)
         dst->skss_per_sig[sig].skss_handler = skss_handler;
      else
         dst->skss_per_sig[sig].skss_handler = VKI_SIG_DFL;

      dst->skss_per_sig[sig].skss_flags   = skss_flags;
   }

   /* Sanity checks. */
   vg_assert(dst->skss_per_sig[VKI_SIGKILL].skss_handler == VKI_SIG_DFL);
   vg_assert(dst->skss_per_sig[VKI_SIGSTOP].skss_handler == VKI_SIG_DFL);

   if (0)
      pp_SKSS();
}


/* ---------------------------------------------------------------------
   After a possible SCSS change, update SKSS and the kernel itself.
   ------------------------------------------------------------------ */

// We need two levels of macro-expansion here to convert __NR_rt_sigreturn
// to a number before converting it to a string... sigh.
extern void my_sigreturn(void);

#if defined(VGP_x86_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movl $" #name ", %eax\n" \
   "   int  $0x80\n" \
   ".previous\n"

#elif defined(VGP_amd64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   movq $" #name ", %rax\n" \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_ppc32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "   li 0, " #name "\n" \
   "   sc\n" \
   ".previous\n"

#elif defined(VGP_ppc64be_linux)
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".section \".opd\",\"aw\"\n" \
   ".align   3\n" \
   "my_sigreturn:\n" \
   ".quad    .my_sigreturn,.TOC.@tocbase,0\n" \
   ".previous\n" \
   ".type    .my_sigreturn,@function\n" \
   ".globl   .my_sigreturn\n" \
   ".my_sigreturn:\n" \
   "   li 0, " #name "\n" \
   "   sc\n"

#elif defined(VGP_ppc64le_linux)
/* Little Endian supports ELF version 2.  In the future, it may
 * support other versions.
 */
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".type    .my_sigreturn,@function\n" \
   "my_sigreturn:\n" \
   "#if _CALL_ELF == 2 \n" \
   "0: addis        2,12,.TOC.-0b@ha\n" \
   "   addi         2,2,.TOC.-0b@l\n" \
   "   .localentry my_sigreturn,.-my_sigreturn\n" \
   "#endif \n" \
   "   sc\n" \
   "   .size my_sigreturn,.-my_sigreturn\n"

#elif defined(VGP_arm_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "    mov  r7, #" #name "\n\t" \
   "    svc  0x00000000\n" \
   ".previous\n"

#elif defined(VGP_arm64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "    mov  x8, #" #name "\n\t" \
   "    svc  0x0\n" \
   ".previous\n"

#elif defined(VGP_x86_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movl $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%eax\n" \
   "    int $0x80\n"

#elif defined(VGP_amd64_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movq $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%rax\n" \
   "    syscall\n"

#elif defined(VGP_s390x_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   " svc " #name "\n" \
   ".previous\n"

#elif defined(VGP_mips32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $2, " #name "\n" /* apparently $2 is v0 */ \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_mips64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "   li $2, " #name "\n" \
   "   syscall\n" \
   ".previous\n"

#elif defined(VGP_x86_solaris) || defined(VGP_amd64_solaris)
/* Not used on Solaris. */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "ud2\n" \
   ".previous\n"

#else
#  error Unknown platform
#endif

#define MY_SIGRETURN(name)  _MY_SIGRETURN(name)
asm(
   MY_SIGRETURN(__NR_rt_sigreturn)
);
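
/* (Re the two-level expansion: '#' stringifies its argument before
   expanding it, so an extra pass is needed.  A stand-alone sketch,
   with a made-up number:

      #define NR      42
      #define STR1(x) #x
      #define STR2(x) STR1(x)

   STR1(NR) yields "NR", whereas STR2(NR) expands NR first and yields
   "42".  MY_SIGRETURN plays the STR2 role for __NR_rt_sigreturn.) */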


static void handle_SCSS_change ( Bool force_update )
{
   Int  res, sig;
   SKSS skss_old;
   vki_sigaction_toK_t   ksa;
   vki_sigaction_fromK_t ksa_old;

   /* Remember old SKSS and calculate new one. */
   skss_old = skss;
   calculate_SKSS_from_SCSS ( &skss );

   /* Compare the new SKSS entries vs the old ones, and update kernel
      where they differ. */
   for (sig = 1; sig <= VG_(max_signal); sig++) {

      /* Trying to do anything with SIGKILL is pointless; just ignore
         it. */
      if (sig == VKI_SIGKILL || sig == VKI_SIGSTOP)
         continue;

      if (!force_update) {
         if ((skss_old.skss_per_sig[sig].skss_handler
              == skss.skss_per_sig[sig].skss_handler)
             && (skss_old.skss_per_sig[sig].skss_flags
                 == skss.skss_per_sig[sig].skss_flags))
            /* no difference */
            continue;
      }

      ksa.ksa_handler = skss.skss_per_sig[sig].skss_handler;
      ksa.sa_flags    = skss.skss_per_sig[sig].skss_flags;
#     if !defined(VGP_ppc32_linux) && \
         !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGP_mips32_linux) && !defined(VGO_solaris)
      ksa.sa_restorer = my_sigreturn;
#     endif
      /* Re above ifdef (also the assertion below), PaulM says:
         The sa_restorer field is not used at all on ppc.  Glibc
         converts the sigaction you give it into a kernel sigaction,
         but it doesn't put anything in the sa_restorer field.
      */

      /* block all signals in handler */
      VG_(sigfillset)( &ksa.sa_mask );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGKILL );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGSTOP );

      if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
         VG_(dmsg)("setting ksig %d to: hdlr %p, flags 0x%lx, "
                   "mask(msb..lsb) 0x%llx 0x%llx\n",
                   sig, ksa.ksa_handler,
                   (UWord)ksa.sa_flags,
                   _VKI_NSIG_WORDS > 1 ? (ULong)ksa.sa_mask.sig[1] : 0,
                   (ULong)ksa.sa_mask.sig[0]);

      res = VG_(sigaction)( sig, &ksa, &ksa_old );
      vg_assert(res == 0);

      /* Since we got the old sigaction more or less for free, might
         as well extract the maximum sanity-check value from it. */
      if (!force_update) {
         vg_assert(ksa_old.ksa_handler
                   == skss_old.skss_per_sig[sig].skss_handler);
#        if defined(VGO_solaris)
         if (ksa_old.ksa_handler == VKI_SIG_DFL
               || ksa_old.ksa_handler == VKI_SIG_IGN) {
            /* The Solaris kernel ignores signal flags (except SA_NOCLDWAIT
               and SA_NOCLDSTOP) and a signal mask if a handler is set to
               SIG_DFL or SIG_IGN. */
            skss_old.skss_per_sig[sig].skss_flags
               &= (VKI_SA_NOCLDWAIT | VKI_SA_NOCLDSTOP);
            vg_assert(VG_(isemptysigset)( &ksa_old.sa_mask ));
            VG_(sigfillset)( &ksa_old.sa_mask );
         }
#        endif
         vg_assert(ksa_old.sa_flags
                   == skss_old.skss_per_sig[sig].skss_flags);
#        if !defined(VGP_ppc32_linux) && \
            !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
            !defined(VGP_mips32_linux) && !defined(VGP_mips64_linux) && \
            !defined(VGO_solaris)
         vg_assert(ksa_old.sa_restorer == my_sigreturn);
#        endif
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGKILL );
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGSTOP );
         vg_assert(VG_(isfullsigset)( &ksa_old.sa_mask ));
      }
   }
}


/* ---------------------------------------------------------------------
   Update/query SCSS in accordance with client requests.
   ------------------------------------------------------------------ */

/* Logic for this alt-stack stuff copied directly from do_sigaltstack
   in kernel/signal.[ch] */

/* True if we are on the alternate signal stack.  */
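/* Note the single unsigned comparison below: since Addr is unsigned,
   the subtraction wraps around for m_SP below ss_sp, so
   "m_SP - ss_sp < ss_size" is equivalent to
   "ss_sp <= m_SP && m_SP < ss_sp + ss_size". */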
static Bool on_sig_stack ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (m_SP - (Addr)tst->altstack.ss_sp < (Addr)tst->altstack.ss_size);
}

static Int sas_ss_flags ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (tst->altstack.ss_size == 0
              ? VKI_SS_DISABLE
              : on_sig_stack(tid, m_SP) ? VKI_SS_ONSTACK : 0);
}


SysRes VG_(do_sys_sigaltstack) ( ThreadId tid, vki_stack_t* ss, vki_stack_t* oss )
{
   Addr m_SP;

   vg_assert(VG_(is_valid_tid)(tid));
   m_SP  = VG_(get_SP)(tid);

   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaltstack: tid %u, "
                "ss %p{%p,sz=%llu,flags=0x%llx}, oss %p (current SP %p)\n",
                tid, (void*)ss,
                ss ? ss->ss_sp : 0,
                (ULong)(ss ? ss->ss_size : 0),
                (ULong)(ss ? ss->ss_flags : 0),
                (void*)oss, (void*)m_SP);

   if (oss != NULL) {
      oss->ss_sp    = VG_(threads)[tid].altstack.ss_sp;
      oss->ss_size  = VG_(threads)[tid].altstack.ss_size;
      oss->ss_flags = VG_(threads)[tid].altstack.ss_flags
                      | sas_ss_flags(tid, m_SP);
   }

   if (ss != NULL) {
      if (on_sig_stack(tid, VG_(get_SP)(tid))) {
         return VG_(mk_SysRes_Error)( VKI_EPERM );
      }
      if (ss->ss_flags != VKI_SS_DISABLE
          && ss->ss_flags != VKI_SS_ONSTACK
          && ss->ss_flags != 0) {
         return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
      if (ss->ss_flags == VKI_SS_DISABLE) {
         VG_(threads)[tid].altstack.ss_flags = VKI_SS_DISABLE;
      } else {
         if (ss->ss_size < VKI_MINSIGSTKSZ) {
            return VG_(mk_SysRes_Error)( VKI_ENOMEM );
         }

         VG_(threads)[tid].altstack.ss_sp    = ss->ss_sp;
         VG_(threads)[tid].altstack.ss_size  = ss->ss_size;
         VG_(threads)[tid].altstack.ss_flags = 0;
      }
   }
   return VG_(mk_SysRes_Success)( 0 );
}


SysRes VG_(do_sys_sigaction) ( Int signo,
                               const vki_sigaction_toK_t* new_act,
                               vki_sigaction_fromK_t* old_act )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaction: sigNo %d, "
                "new %#lx, old %#lx, new flags 0x%llx\n",
                signo, (UWord)new_act, (UWord)old_act,
                (ULong)(new_act ? new_act->sa_flags : 0));

   /* Rule out various error conditions.  The aim is to ensure that
      when the call is passed to the kernel it will definitely
      succeed. */

   /* Reject out-of-range signal numbers. */
   if (signo < 1 || signo > VG_(max_signal)) goto bad_signo;

   /* don't let them use our signals */
   if ( (signo > VG_SIGVGRTUSERMAX)
        && new_act
        && !(new_act->ksa_handler == VKI_SIG_DFL
             || new_act->ksa_handler == VKI_SIG_IGN) )
      goto bad_signo_reserved;

   /* Reject attempts to set a handler (or set ignore) for SIGKILL. */
   if ( (signo == VKI_SIGKILL || signo == VKI_SIGSTOP)
       && new_act
       && new_act->ksa_handler != VKI_SIG_DFL)
      goto bad_sigkill_or_sigstop;

   /* If the client supplied non-NULL old_act, copy the relevant SCSS
      entry into it. */
   if (old_act) {
      old_act->ksa_handler = scss.scss_per_sig[signo].scss_handler;
      old_act->sa_flags    = scss.scss_per_sig[signo].scss_flags;
      old_act->sa_mask     = scss.scss_per_sig[signo].scss_mask;
#     if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGO_solaris)
      old_act->sa_restorer = scss.scss_per_sig[signo].scss_restorer;
#     endif
   }

   /* And now copy new SCSS entry from new_act. */
   if (new_act) {
      scss.scss_per_sig[signo].scss_handler  = new_act->ksa_handler;
      scss.scss_per_sig[signo].scss_flags    = new_act->sa_flags;
      scss.scss_per_sig[signo].scss_mask     = new_act->sa_mask;

      scss.scss_per_sig[signo].scss_restorer = NULL;
#     if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGO_solaris)
      scss.scss_per_sig[signo].scss_restorer = new_act->sa_restorer;
#     endif

      scss.scss_per_sig[signo].scss_sa_tramp = NULL;
#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      scss.scss_per_sig[signo].scss_sa_tramp = new_act->sa_tramp;
#     endif

      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGKILL);
      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGSTOP);
   }

   /* All happy bunnies ... */
   if (new_act) {
      handle_SCSS_change( False /* lazy update */ );
   }
   return VG_(mk_SysRes_Success)( 0 );

  bad_signo:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: bad signal number %d in sigaction()\n", signo);
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_signo_reserved:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is used internally by Valgrind\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_sigkill_or_sigstop:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is uncatchable\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );
}


static
void do_sigprocmask_bitops ( Int vki_how,
                             vki_sigset_t* orig_set,
                             vki_sigset_t* modifier )
{
   switch (vki_how) {
      case VKI_SIG_BLOCK:
         VG_(sigaddset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_UNBLOCK:
         VG_(sigdelset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_SETMASK:
         *orig_set = *modifier;
         break;
      default:
         VG_(core_panic)("do_sigprocmask_bitops");
         break;
   }
}

static
HChar* format_sigset ( const vki_sigset_t* set )
{
   static HChar buf[_VKI_NSIG_WORDS * 16 + 1];
   int w;

   VG_(strcpy)(buf, "");

   for (w = _VKI_NSIG_WORDS - 1; w >= 0; w--)
   {
#     if _VKI_NSIG_BPW == 32
      VG_(sprintf)(buf + VG_(strlen)(buf), "%08llx",
                   set ? (ULong)set->sig[w] : 0);
#     elif _VKI_NSIG_BPW == 64
      VG_(sprintf)(buf + VG_(strlen)(buf), "%16llx",
                   set ? (ULong)set->sig[w] : 0);
#     else
#       error "Unsupported value for _VKI_NSIG_BPW"
#     endif
   }

   return buf;
}

/*
   This updates the thread's signal mask.  There's no such thing as a
   process-wide signal mask.

   Note that the thread signal masks are an implicit part of SCSS,
   which is why this routine is allowed to mess with them.
*/
static
void do_setmask ( ThreadId tid,
                  Int how,
                  vki_sigset_t* newset,
                  vki_sigset_t* oldset )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("do_setmask: tid = %u how = %d (%s), newset = %p (%s)\n",
                tid, how,
                how==VKI_SIG_BLOCK ? "SIG_BLOCK" : (
                   how==VKI_SIG_UNBLOCK ? "SIG_UNBLOCK" : (
                      how==VKI_SIG_SETMASK ? "SIG_SETMASK" : "???")),
                newset, newset ? format_sigset(newset) : "NULL" );

   /* Just do this thread. */
   vg_assert(VG_(is_valid_tid)(tid));
   if (oldset) {
      *oldset = VG_(threads)[tid].sig_mask;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("\toldset=%p %s\n", oldset, format_sigset(oldset));
   }
   if (newset) {
      do_sigprocmask_bitops (how, &VG_(threads)[tid].sig_mask, newset );
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGKILL);
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGSTOP);
      VG_(threads)[tid].tmp_sig_mask = VG_(threads)[tid].sig_mask;
   }
}
1340 
1341 
1342 SysRes VG_(do_sys_sigprocmask) ( ThreadId tid,
1343                                  Int how,
1344                                  vki_sigset_t* set,
1345                                  vki_sigset_t* oldset )
1346 {
1347    switch(how) {
1348       case VKI_SIG_BLOCK:
1349       case VKI_SIG_UNBLOCK:
1350       case VKI_SIG_SETMASK:
1351          vg_assert(VG_(is_valid_tid)(tid));
1352          do_setmask ( tid, how, set, oldset );
1353          return VG_(mk_SysRes_Success)( 0 );
1354 
1355       default:
1356          VG_(dmsg)("sigprocmask: unknown 'how' field %d\n", how);
1357          return VG_(mk_SysRes_Error)( VKI_EINVAL );
1358    }
1359 }
1360 
1361 
1362 /* ---------------------------------------------------------------------
1363    LOW LEVEL STUFF TO DO WITH SIGNALS: IMPLEMENTATION
1364    ------------------------------------------------------------------ */
1365 
1366 /* ---------------------------------------------------------------------
1367    Handy utilities to block/restore all host signals.
1368    ------------------------------------------------------------------ */
1369 
1370 /* Block all host signals, dumping the old mask in *saved_mask. */
1371 static void block_all_host_signals ( /* OUT */ vki_sigset_t* saved_mask )
1372 {
1373    Int           ret;
1374    vki_sigset_t block_procmask;
1375    VG_(sigfillset)(&block_procmask);
1376    ret = VG_(sigprocmask)
1377             (VKI_SIG_SETMASK, &block_procmask, saved_mask);
1378    vg_assert(ret == 0);
1379 }
1380 
1381 /* Restore the blocking mask using the supplied saved one. */
1382 static void restore_all_host_signals ( /* IN */ vki_sigset_t* saved_mask )
1383 {
1384    Int ret;
1385    ret = VG_(sigprocmask)(VKI_SIG_SETMASK, saved_mask, NULL);
1386    vg_assert(ret == 0);
1387 }
1388 
1389 void VG_(clear_out_queued_signals)( ThreadId tid, vki_sigset_t* saved_mask )
1390 {
1391    block_all_host_signals(saved_mask);
1392    if (VG_(threads)[tid].sig_queue != NULL) {
1393       VG_(free)(VG_(threads)[tid].sig_queue);
1394       VG_(threads)[tid].sig_queue = NULL;
1395    }
1396    restore_all_host_signals(saved_mask);
1397 }
1398 
1399 /* ---------------------------------------------------------------------
1400    The signal simulation proper.  A simplified version of what the
1401    Linux kernel does.
1402    ------------------------------------------------------------------ */
1403 
1404 /* Set up a stack frame (VgSigContext) for the client's signal
1405    handler. */
1406 static
1407 void push_signal_frame ( ThreadId tid, const vki_siginfo_t *siginfo,
1408                                        const struct vki_ucontext *uc )
1409 {
1410    Bool         on_altstack;
1411    Addr         esp_top_of_frame;
1412    ThreadState* tst;
1413    Int		sigNo = siginfo->si_signo;
1414 
1415    vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
1416    vg_assert(VG_(is_valid_tid)(tid));
1417    tst = & VG_(threads)[tid];
1418 
1419    if (VG_(clo_trace_signals)) {
1420       VG_(dmsg)("push_signal_frame (thread %u): signal %d\n", tid, sigNo);
1421       VG_(get_and_pp_StackTrace)(tid, 10);
1422    }
1423 
1424    if (/* this signal asked to run on an alt stack */
1425        (scss.scss_per_sig[sigNo].scss_flags & VKI_SA_ONSTACK )
1426        && /* there is a defined and enabled alt stack, which we're not
1427              already using.  Logic from get_sigframe in
1428              arch/i386/kernel/signal.c. */
1429           sas_ss_flags(tid, VG_(get_SP)(tid)) == 0
1430       ) {
1431       on_altstack = True;
1432       esp_top_of_frame
1433          = (Addr)(tst->altstack.ss_sp) + tst->altstack.ss_size;
1434       if (VG_(clo_trace_signals))
1435          VG_(dmsg)("delivering signal %d (%s) to thread %u: "
1436                    "on ALT STACK (%p-%p; %ld bytes)\n",
1437                    sigNo, VG_(signame)(sigNo), tid, tst->altstack.ss_sp,
1438                    (UChar *)tst->altstack.ss_sp + tst->altstack.ss_size,
1439                    (Word)tst->altstack.ss_size );
1440    } else {
1441       on_altstack = False;
1442       esp_top_of_frame = VG_(get_SP)(tid) - VG_STACK_REDZONE_SZB;
1443    }
1444 
1445    /* Signal delivery to tools */
1446    VG_TRACK( pre_deliver_signal, tid, sigNo, on_altstack );
1447 
1448    vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_IGN);
1449    vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_DFL);
1450 
1451    /* This may fail if the client stack is busted; if that happens,
1452       the whole process will exit rather than simply calling the
1453       signal handler. */
1454    VG_(sigframe_create) (tid, on_altstack, esp_top_of_frame, siginfo, uc,
1455                          scss.scss_per_sig[sigNo].scss_handler,
1456                          scss.scss_per_sig[sigNo].scss_flags,
1457                          &tst->sig_mask,
1458                          scss.scss_per_sig[sigNo].scss_restorer);
1459 }
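
/* Editorial sketch (client-side, not Valgrind code): the alternate-stack
   path above is taken for handlers the client registered with the
   standard POSIX API, roughly like this ("my_handler" is hypothetical): */
#if 0
   stack_t ss;
   ss.ss_sp    = malloc(SIGSTKSZ);
   ss.ss_size  = SIGSTKSZ;
   ss.ss_flags = 0;
   sigaltstack(&ss, NULL);                     /* define the alt stack */

   struct sigaction sa;
   sa.sa_sigaction = my_handler;
   sa.sa_flags     = SA_SIGINFO | SA_ONSTACK;  /* -> VKI_SA_ONSTACK here */
   sigemptyset(&sa.sa_mask);
   sigaction(SIGSEGV, &sa, NULL);
#endif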
1460 
1461 
1462 const HChar *VG_(signame)(Int sigNo)
1463 {
1464    static HChar buf[20];  // large enough
1465 
1466    switch(sigNo) {
1467       case VKI_SIGHUP:    return "SIGHUP";
1468       case VKI_SIGINT:    return "SIGINT";
1469       case VKI_SIGQUIT:   return "SIGQUIT";
1470       case VKI_SIGILL:    return "SIGILL";
1471       case VKI_SIGTRAP:   return "SIGTRAP";
1472       case VKI_SIGABRT:   return "SIGABRT";
1473       case VKI_SIGBUS:    return "SIGBUS";
1474       case VKI_SIGFPE:    return "SIGFPE";
1475       case VKI_SIGKILL:   return "SIGKILL";
1476       case VKI_SIGUSR1:   return "SIGUSR1";
1477       case VKI_SIGUSR2:   return "SIGUSR2";
1478       case VKI_SIGSEGV:   return "SIGSEGV";
1479       case VKI_SIGSYS:    return "SIGSYS";
1480       case VKI_SIGPIPE:   return "SIGPIPE";
1481       case VKI_SIGALRM:   return "SIGALRM";
1482       case VKI_SIGTERM:   return "SIGTERM";
1483 #     if defined(VKI_SIGSTKFLT)
1484       case VKI_SIGSTKFLT: return "SIGSTKFLT";
1485 #     endif
1486       case VKI_SIGCHLD:   return "SIGCHLD";
1487       case VKI_SIGCONT:   return "SIGCONT";
1488       case VKI_SIGSTOP:   return "SIGSTOP";
1489       case VKI_SIGTSTP:   return "SIGTSTP";
1490       case VKI_SIGTTIN:   return "SIGTTIN";
1491       case VKI_SIGTTOU:   return "SIGTTOU";
1492       case VKI_SIGURG:    return "SIGURG";
1493       case VKI_SIGXCPU:   return "SIGXCPU";
1494       case VKI_SIGXFSZ:   return "SIGXFSZ";
1495       case VKI_SIGVTALRM: return "SIGVTALRM";
1496       case VKI_SIGPROF:   return "SIGPROF";
1497       case VKI_SIGWINCH:  return "SIGWINCH";
1498       case VKI_SIGIO:     return "SIGIO";
1499 #     if defined(VKI_SIGPWR)
1500       case VKI_SIGPWR:    return "SIGPWR";
1501 #     endif
1502 #     if defined(VKI_SIGUNUSED) && (VKI_SIGUNUSED != VKI_SIGSYS)
1503       case VKI_SIGUNUSED: return "SIGUNUSED";
1504 #     endif
1505 
1506       /* Solaris-specific signals. */
1507 #     if defined(VKI_SIGEMT)
1508       case VKI_SIGEMT:    return "SIGEMT";
1509 #     endif
1510 #     if defined(VKI_SIGWAITING)
1511       case VKI_SIGWAITING: return "SIGWAITING";
1512 #     endif
1513 #     if defined(VKI_SIGLWP)
1514       case VKI_SIGLWP:    return "SIGLWP";
1515 #     endif
1516 #     if defined(VKI_SIGFREEZE)
1517       case VKI_SIGFREEZE: return "SIGFREEZE";
1518 #     endif
1519 #     if defined(VKI_SIGTHAW)
1520       case VKI_SIGTHAW:   return "SIGTHAW";
1521 #     endif
1522 #     if defined(VKI_SIGCANCEL)
1523       case VKI_SIGCANCEL: return "SIGCANCEL";
1524 #     endif
1525 #     if defined(VKI_SIGLOST)
1526       case VKI_SIGLOST:   return "SIGLOST";
1527 #     endif
1528 #     if defined(VKI_SIGXRES)
1529       case VKI_SIGXRES:   return "SIGXRES";
1530 #     endif
1531 #     if defined(VKI_SIGJVM1)
1532       case VKI_SIGJVM1:   return "SIGJVM1";
1533 #     endif
1534 #     if defined(VKI_SIGJVM2)
1535       case VKI_SIGJVM2:   return "SIGJVM2";
1536 #     endif
1537 
1538 #  if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
1539    case VKI_SIGRTMIN ... VKI_SIGRTMAX:
1540       VG_(sprintf)(buf, "SIGRT%d", sigNo-VKI_SIGRTMIN);
1541       return buf;
1542 #  endif
1543 
1544    default:
1545       VG_(sprintf)(buf, "SIG%d", sigNo);
1546       return buf;
1547    }
1548 }
1549 
1550 /* Hit ourselves with a signal using the default handler */
1551 void VG_(kill_self)(Int sigNo)
1552 {
1553    Int r;
1554    vki_sigset_t	         mask, origmask;
1555    vki_sigaction_toK_t   sa, origsa2;
1556    vki_sigaction_fromK_t origsa;
1557 
1558    sa.ksa_handler = VKI_SIG_DFL;
1559    sa.sa_flags = 0;
1560 #  if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
1561       !defined(VGO_solaris)
1562    sa.sa_restorer = 0;
1563 #  endif
1564    VG_(sigemptyset)(&sa.sa_mask);
1565 
1566    VG_(sigaction)(sigNo, &sa, &origsa);
1567 
1568    VG_(sigemptyset)(&mask);
1569    VG_(sigaddset)(&mask, sigNo);
1570    VG_(sigprocmask)(VKI_SIG_UNBLOCK, &mask, &origmask);
1571 
1572    r = VG_(kill)(VG_(getpid)(), sigNo);
1573 #  if !defined(VGO_darwin)
1574    /* This sometimes fails with EPERM on Darwin.  I don't know why. */
1575    vg_assert(r == 0);
1576 #  endif
1577 
1578    VG_(convert_sigaction_fromK_to_toK)( &origsa, &origsa2 );
1579    VG_(sigaction)(sigNo, &origsa2, NULL);
1580    VG_(sigprocmask)(VKI_SIG_SETMASK, &origmask, NULL);
1581 }
1582 
1583 // The si_code describes where the signal came from.  Some come from the
1584 // kernel, eg.: seg faults, illegal opcodes.  Some come from the user, eg.:
1585 // from kill() (SI_USER), or timer_settime() (SI_TIMER), or an async I/O
1586 // request (SI_ASYNCIO).  There's lots of implementation-defined leeway in
1587 // POSIX, but the user vs. kernel distinction is what we want here.  We also
1588 // pass in some other details that can help when si_code is unreliable.
1589 static Bool is_signal_from_kernel(ThreadId tid, int signum, int si_code)
1590 {
1591 #  if defined(VGO_linux) || defined(VGO_solaris)
1592    // On Linux, SI_USER is zero, negative values are from the user, positive
1593    // values are from the kernel.  There are SI_FROMUSER and SI_FROMKERNEL
1594    // macros but we don't use them here because other platforms don't have
1595    // them.
1596    return ( si_code > VKI_SI_USER ? True : False );
1597 
1598 #  elif defined(VGO_darwin)
1599    // On Darwin 9.6.0, the si_code is completely unreliable.  It should be the
1600    // case that 0 means "user", and >0 means "kernel".  But:
1601    // - For SIGSEGV, it seems quite reliable.
1602    // - For SIGBUS, it's always 2.
1603    // - For SIGFPE, it's often 0, even for kernel ones (eg.
1604    //   div-by-integer-zero always gives zero).
1605    // - For SIGILL, it's unclear.
1606    // - For SIGTRAP, it's always 1.
1607    // You can see the "NOTIMP" (not implemented) status of a number of the
1608    // sub-cases in sys/signal.h.  Hopefully future versions of Darwin will
1609    // get this right.
1610 
1611    // If we're blocked waiting on a syscall, it must be a user signal, because
1612    // the kernel won't generate sync signals within syscalls.
1613    if (VG_(threads)[tid].status == VgTs_WaitSys) {
1614       return False;
1615 
1616    // If it's a SIGSEGV, use the proper condition, since it's fairly reliable.
1617    } else if (VKI_SIGSEGV == signum) {
1618       return ( si_code > 0 ? True : False );
1619 
1620    // If it's anything else, assume it's kernel-generated.  Reason being that
1621    // kernel-generated sync signals are more common, and it's probable that
1622    // misdiagnosing a user signal as a kernel signal is better than the
1623    // opposite.
1624    } else {
1625       return True;
1626    }
1627 #  else
1628 #    error Unknown OS
1629 #  endif
1630 }
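
/* Editorial examples of the rule above, on Linux: kill() yields si_code
   SI_USER (0) and sigqueue() yields SI_QUEUE (-1), both "from user";
   a genuine fault carries a positive code such as SEGV_MAPERR (1),
   hence "from kernel". */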
1631 
1632 /*
1633    Perform the default action of a signal.  If the signal is fatal, it
1634    terminates all other threads, but it doesn't actually kill
1635    the process or the calling thread.
1636 
1637    If we're not being quiet, then print out some more detail about
1638    fatal signals (esp. core dumping signals).
1639  */
1640 static void default_action(const vki_siginfo_t *info, ThreadId tid)
1641 {
1642    Int  sigNo     = info->si_signo;
1643    Bool terminate = False;	/* kills process         */
1644    Bool core      = False;	/* kills process w/ core */
1645    struct vki_rlimit corelim;
1646    Bool could_core;
1647    ThreadState* tst = VG_(get_ThreadState)(tid);
1648 
1649    vg_assert(VG_(is_running_thread)(tid));
1650 
1651    switch(sigNo) {
1652       case VKI_SIGQUIT:	/* core */
1653       case VKI_SIGILL:	/* core */
1654       case VKI_SIGABRT:	/* core */
1655       case VKI_SIGFPE:	/* core */
1656       case VKI_SIGSEGV:	/* core */
1657       case VKI_SIGBUS:	/* core */
1658       case VKI_SIGTRAP:	/* core */
1659       case VKI_SIGSYS:	/* core */
1660       case VKI_SIGXCPU:	/* core */
1661       case VKI_SIGXFSZ:	/* core */
1662 
1663       /* Solaris-specific signals. */
1664 #     if defined(VKI_SIGEMT)
1665       case VKI_SIGEMT:	/* core */
1666 #     endif
1667 
1668          terminate = True;
1669          core = True;
1670          break;
1671 
1672       case VKI_SIGHUP:	/* term */
1673       case VKI_SIGINT:	/* term */
1674       case VKI_SIGKILL:	/* term - we won't see this */
1675       case VKI_SIGPIPE:	/* term */
1676       case VKI_SIGALRM:	/* term */
1677       case VKI_SIGTERM:	/* term */
1678       case VKI_SIGUSR1:	/* term */
1679       case VKI_SIGUSR2:	/* term */
1680       case VKI_SIGIO:	/* term */
1681 #     if defined(VKI_SIGPWR)
1682       case VKI_SIGPWR:	/* term */
1683 #     endif
1684       case VKI_SIGPROF:	/* term */
1685       case VKI_SIGVTALRM:	/* term */
1686 #     if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
1687       case VKI_SIGRTMIN ... VKI_SIGRTMAX: /* term */
1688 #     endif
1689 
1690       /* Solaris-specific signals. */
1691 #     if defined(VKI_SIGLOST)
1692       case VKI_SIGLOST:	/* term */
1693 #     endif
1694 
1695          terminate = True;
1696          break;
1697    }
1698 
1699    vg_assert(!core || (core && terminate));
1700 
1701    if (VG_(clo_trace_signals))
1702       VG_(dmsg)("delivering %d (code %d) to default handler; action: %s%s\n",
1703                 sigNo, info->si_code, terminate ? "terminate" : "ignore",
1704                 core ? "+core" : "");
1705 
1706    if (!terminate)
1707       return;			/* nothing to do */
1708 
1709 #if defined(VGO_linux)
1710    if (terminate && (tst->ptrace & VKI_PT_PTRACED)
1711        && (sigNo != VKI_SIGKILL)) {
1712       VG_(kill)(VG_(getpid)(), VKI_SIGSTOP);
1713       return;
1714    }
1715 #endif
1716 
1717    could_core = core;
1718 
1719    if (core) {
1720       /* If they set the core-size limit to zero, don't generate a
1721 	 core file */
1722 
1723       VG_(getrlimit)(VKI_RLIMIT_CORE, &corelim);
1724 
1725       if (corelim.rlim_cur == 0)
1726 	 core = False;
1727    }
1728 
1729    if ( VG_(clo_verbosity) >= 1
1730         || (could_core && is_signal_from_kernel(tid, sigNo, info->si_code))
1731         || VG_(clo_xml) ) {
1732       if (VG_(clo_xml)) {
1733          VG_(printf_xml)("<fatal_signal>\n");
1734          VG_(printf_xml)("  <tid>%d</tid>\n", tid);
1735          if (tst->thread_name) {
1736             VG_(printf_xml)("  <threadname>%s</threadname>\n",
1737                             tst->thread_name);
1738          }
1739          VG_(printf_xml)("  <signo>%d</signo>\n", sigNo);
1740          VG_(printf_xml)("  <signame>%s</signame>\n", VG_(signame)(sigNo));
1741          VG_(printf_xml)("  <sicode>%d</sicode>\n", info->si_code);
1742       } else {
1743          VG_(umsg)(
1744             "\n"
1745             "Process terminating with default action of signal %d (%s)%s\n",
1746             sigNo, VG_(signame)(sigNo), core ? ": dumping core" : "");
1747       }
1748 
1749       /* Be helpful - decode some more details about this fault */
1750       if (is_signal_from_kernel(tid, sigNo, info->si_code)) {
1751 	 const HChar *event = NULL;
1752 	 Bool haveaddr = True;
1753 
1754 	 switch(sigNo) {
1755 	 case VKI_SIGSEGV:
1756 	    switch(info->si_code) {
1757 	    case VKI_SEGV_MAPERR: event = "Access not within mapped region";
1758                                   break;
1759 	    case VKI_SEGV_ACCERR: event = "Bad permissions for mapped region";
1760                                   break;
1761 	    case VKI_SEGV_MADE_UP_GPF:
1762 	       /* General Protection Fault: The CPU/kernel
1763 		  isn't telling us anything useful, but this
1764 		  is commonly the result of exceeding a
1765 		  segment limit. */
1766 	       event = "General Protection Fault";
1767 	       haveaddr = False;
1768 	       break;
1769 	    }
1770 #if 0
1771             {
1772               HChar buf[50];  // large enough
1773               VG_(am_show_nsegments)(0,"post segfault");
1774               VG_(sprintf)(buf, "/bin/cat /proc/%d/maps", VG_(getpid)());
1775               VG_(system)(buf);
1776             }
1777 #endif
1778 	    break;
1779 
1780 	 case VKI_SIGILL:
1781 	    switch(info->si_code) {
1782 	    case VKI_ILL_ILLOPC: event = "Illegal opcode"; break;
1783 	    case VKI_ILL_ILLOPN: event = "Illegal operand"; break;
1784 	    case VKI_ILL_ILLADR: event = "Illegal addressing mode"; break;
1785 	    case VKI_ILL_ILLTRP: event = "Illegal trap"; break;
1786 	    case VKI_ILL_PRVOPC: event = "Privileged opcode"; break;
1787 	    case VKI_ILL_PRVREG: event = "Privileged register"; break;
1788 	    case VKI_ILL_COPROC: event = "Coprocessor error"; break;
1789 	    case VKI_ILL_BADSTK: event = "Internal stack error"; break;
1790 	    }
1791 	    break;
1792 
1793 	 case VKI_SIGFPE:
1794 	    switch (info->si_code) {
1795 	    case VKI_FPE_INTDIV: event = "Integer divide by zero"; break;
1796 	    case VKI_FPE_INTOVF: event = "Integer overflow"; break;
1797 	    case VKI_FPE_FLTDIV: event = "FP divide by zero"; break;
1798 	    case VKI_FPE_FLTOVF: event = "FP overflow"; break;
1799 	    case VKI_FPE_FLTUND: event = "FP underflow"; break;
1800 	    case VKI_FPE_FLTRES: event = "FP inexact"; break;
1801 	    case VKI_FPE_FLTINV: event = "FP invalid operation"; break;
1802 	    case VKI_FPE_FLTSUB: event = "FP subscript out of range"; break;
1803 
1804             /* Solaris-specific codes. */
1805 #           if defined(VKI_FPE_FLTDEN)
1806 	    case VKI_FPE_FLTDEN: event = "FP denormalize"; break;
1807 #           endif
1808 	    }
1809 	    break;
1810 
1811 	 case VKI_SIGBUS:
1812 	    switch (info->si_code) {
1813 	    case VKI_BUS_ADRALN: event = "Invalid address alignment"; break;
1814 	    case VKI_BUS_ADRERR: event = "Non-existent physical address"; break;
1815 	    case VKI_BUS_OBJERR: event = "Hardware error"; break;
1816 	    }
1817 	    break;
1818 	 } /* switch (sigNo) */
1819 
1820          if (VG_(clo_xml)) {
1821             if (event != NULL)
1822                VG_(printf_xml)("  <event>%s</event>\n", event);
1823             if (haveaddr)
1824                VG_(printf_xml)("  <siaddr>%p</siaddr>\n",
1825                                info->VKI_SIGINFO_si_addr);
1826          } else {
1827             if (event != NULL) {
1828                if (haveaddr)
1829                   VG_(umsg)(" %s at address %p\n",
1830                             event, info->VKI_SIGINFO_si_addr);
1831                else
1832                   VG_(umsg)(" %s\n", event);
1833             }
1834          }
1835       }
1836       /* Print a stack trace.  Be cautious if the thread's SP is in an
1837          obviously stupid place (not mapped readable) that would
1838          likely cause a segfault. */
1839       if (VG_(is_valid_tid)(tid)) {
1840          Word first_ip_delta = 0;
1841 #if defined(VGO_linux) || defined(VGO_solaris)
1842          /* Make sure that the address stored in the stack pointer is
1843             located in a mapped page. That is not necessarily so. E.g.
1844             consider the scenario where the stack pointer was decreased
1845             and now has a value that is just below the end of a page that has
1846             not been mapped yet. In that case VG_(am_is_valid_for_client)
1847             will consider the address of the stack pointer invalid and that
1848             would cause a back-trace of depth 1 to be printed, instead of a
1849             full back-trace. */
1850          if (tid == 1) {           // main thread
1851             Addr esp  = VG_(get_SP)(tid);
1852             Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
1853             if (VG_(am_addr_is_in_extensible_client_stack)(base)
1854                 && VG_(extend_stack)(tid, base)) {
1855                if (VG_(clo_trace_signals))
1856                   VG_(dmsg)("       -> extended stack base to %#lx\n",
1857                             VG_PGROUNDDN(esp));
1858             }
1859          }
1860 #endif
1861 #if defined(VGA_s390x)
1862          if (sigNo == VKI_SIGILL) {
1863             /* The guest instruction address has been adjusted earlier to
1864                point to the insn following the one that could not be decoded.
1865                When printing the back-trace here we need to undo that
1866                adjustment so the first line in the back-trace reports the
1867                correct address. */
1868             Addr  addr = (Addr)info->VKI_SIGINFO_si_addr;
1869             UChar byte = ((UChar *)addr)[0];
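            /* s390 encodes an instruction's length in the two top bits
               of its first opcode byte: 00 -> 2 bytes, 01/10 -> 4 bytes,
               11 -> 6 bytes.  The expression below maps
               (byte >> 6) = 0,1,2,3 to 2,4,4,6 accordingly. */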
1870             Int   insn_length = ((((byte >> 6) + 1) >> 1) + 1) << 1;
1871 
1872             first_ip_delta = -insn_length;
1873          }
1874 #endif
1875          ExeContext* ec = VG_(am_is_valid_for_client)
1876                              (VG_(get_SP)(tid), sizeof(Addr), VKI_PROT_READ)
1877                         ? VG_(record_ExeContext)( tid, first_ip_delta )
1878                       : VG_(record_depth_1_ExeContext)( tid,
1879                                                         first_ip_delta );
1880          vg_assert(ec);
1881          VG_(pp_ExeContext)( ec );
1882       }
1883       if (sigNo == VKI_SIGSEGV
1884           && is_signal_from_kernel(tid, sigNo, info->si_code)
1885           && info->si_code == VKI_SEGV_MAPERR) {
1886          VG_(umsg)(" If you believe this happened as a result of a stack\n" );
1887          VG_(umsg)(" overflow in your program's main thread (unlikely but\n");
1888          VG_(umsg)(" possible), you can try to increase the size of the\n"  );
1889          VG_(umsg)(" main thread stack using the --main-stacksize= flag.\n" );
1890          // FIXME: assumes main ThreadId == 1
1891          if (VG_(is_valid_tid)(1)) {
1892             VG_(umsg)(
1893                " The main thread stack size used in this run was %lu.\n",
1894                VG_(threads)[1].client_stack_szB);
1895          }
1896       }
1897       if (VG_(clo_xml)) {
1898          /* postamble */
1899          VG_(printf_xml)("</fatal_signal>\n");
1900          VG_(printf_xml)("\n");
1901       }
1902    }
1903 
1904    if (VG_(clo_vgdb) != Vg_VgdbNo
1905        && VG_(dyn_vgdb_error) <= VG_(get_n_errs_shown)() + 1) {
1906       /* Note: we add + 1 to n_errs_shown as the fatal signal was not
1907          reported through error msg, and so was not counted. */
1908       VG_(gdbserver_report_fatal_signal) (info, tid);
1909    }
1910 
1911    if (core) {
1912       static const struct vki_rlimit zero = { 0, 0 };
1913 
1914       VG_(make_coredump)(tid, info, corelim.rlim_cur);
1915 
1916       /* Make sure we don't get a confusing kernel-generated
1917 	 coredump when we finally exit */
1918       VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
1919    }
1920 
1921    // what's this for?
1922    //VG_(threads)[VG_(master_tid)].os_state.fatalsig = sigNo;
1923 
1924    /* everyone but tid dies */
1925    VG_(nuke_all_threads_except)(tid, VgSrc_FatalSig);
1926    VG_(reap_threads)(tid);
1927    /* stash fatal signal in this thread */
1928    VG_(threads)[tid].exitreason = VgSrc_FatalSig;
1929    VG_(threads)[tid].os_state.fatalsig = sigNo;
1930 }
1931 
1932 /*
1933    This does the business of delivering a signal to a thread.  It may
1934    be called from either a real signal handler, or from normal code to
1935    cause the thread to enter the signal handler.
1936 
1937    This updates the thread state, but it does not set it to be
1938    Runnable.
1939 */
1940 static void deliver_signal ( ThreadId tid, const vki_siginfo_t *info,
1941                                            const struct vki_ucontext *uc )
1942 {
1943    Int			sigNo = info->si_signo;
1944    SCSS_Per_Signal	*handler = &scss.scss_per_sig[sigNo];
1945    void			*handler_fn;
1946    ThreadState		*tst = VG_(get_ThreadState)(tid);
1947 
1948    if (VG_(clo_trace_signals))
1949       VG_(dmsg)("delivering signal %d (%s):%d to thread %u\n",
1950                 sigNo, VG_(signame)(sigNo), info->si_code, tid );
1951 
1952    if (sigNo == VG_SIGVGKILL) {
1953       /* If this is a SIGVGKILL, we're expecting it to interrupt any
1954 	 blocked syscall.  It doesn't matter whether the VCPU state is
1955 	 set to restart or not, because we don't expect it will
1956 	 execute any more client instructions. */
1957       vg_assert(VG_(is_exiting)(tid));
1958       return;
1959    }
1960 
1961    /* If the client specifies SIG_IGN, treat it as SIG_DFL.
1962 
1963       If deliver_signal() is being called on a thread, we want
1964       the signal to get through no matter what; if they're ignoring
1965       it, then we do this override (this is so we can send it SIGSEGV,
1966       etc). */
1967    handler_fn = handler->scss_handler;
1968    if (handler_fn == VKI_SIG_IGN)
1969       handler_fn = VKI_SIG_DFL;
1970 
1971    vg_assert(handler_fn != VKI_SIG_IGN);
1972 
1973    if (handler_fn == VKI_SIG_DFL) {
1974       default_action(info, tid);
1975    } else {
1976       /* Create a signal delivery frame, and set the client's %ESP and
1977 	 %EIP so that when execution continues, we will enter the
1978 	 signal handler with the frame on top of the client's stack,
1979 	 as it expects.
1980 
1981 	 Signal delivery can fail if the client stack is too small or
1982 	 missing, and we can't push the frame.  If that happens,
1983 	 push_signal_frame will cause the whole process to exit when
1984 	 we next hit the scheduler.
1985       */
1986       vg_assert(VG_(is_valid_tid)(tid));
1987 
1988       push_signal_frame ( tid, info, uc );
1989 
1990       if (handler->scss_flags & VKI_SA_ONESHOT) {
1991 	 /* Do the ONESHOT thing. */
1992 	 handler->scss_handler = VKI_SIG_DFL;
1993 
1994 	 handle_SCSS_change( False /* lazy update */ );
1995       }
1996 
1997       /* At this point:
1998 	 tst->sig_mask is the current signal mask
1999 	 tst->tmp_sig_mask is the same as sig_mask, unless we're in sigsuspend
2000 	 handler->scss_mask is the mask set by the handler
2001 
2002 	 Handler gets a mask of tmp_sig_mask|handler_mask|signo
2003        */
2004       tst->sig_mask = tst->tmp_sig_mask;
2005       if (!(handler->scss_flags & VKI_SA_NOMASK)) {
2006 	 VG_(sigaddset_from_set)(&tst->sig_mask, &handler->scss_mask);
2007 	 VG_(sigaddset)(&tst->sig_mask, sigNo);
2008 	 tst->tmp_sig_mask = tst->sig_mask;
2009       }
2010    }
2011 
2012    /* Thread state is ready to go - just add Runnable */
2013 }
2014 
2015 static void resume_scheduler(ThreadId tid)
2016 {
2017    ThreadState *tst = VG_(get_ThreadState)(tid);
2018 
2019    vg_assert(tst->os_state.lwpid == VG_(gettid)());
2020 
2021    if (tst->sched_jmpbuf_valid) {
2022       /* Can't continue; must longjmp back to the scheduler and thus
2023          enter the sighandler immediately. */
2024       VG_MINIMAL_LONGJMP(tst->sched_jmpbuf);
2025    }
2026 }
2027 
2028 static void synth_fault_common(ThreadId tid, Addr addr, Int si_code)
2029 {
2030    vki_siginfo_t info;
2031 
2032    vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2033 
2034    VG_(memset)(&info, 0, sizeof(info));
2035    info.si_signo = VKI_SIGSEGV;
2036    info.si_code = si_code;
2037    info.VKI_SIGINFO_si_addr = (void*)addr;
2038 
2039    /* Even if gdbserver indicates to ignore the signal, we must deliver it.
2040       So ignore the return value of VG_(gdbserver_report_signal). */
2041    (void) VG_(gdbserver_report_signal) (&info, tid);
2042 
2043    /* If they're trying to block the signal, force it to be delivered */
2044    if (VG_(sigismember)(&VG_(threads)[tid].sig_mask, VKI_SIGSEGV))
2045       VG_(set_default_handler)(VKI_SIGSEGV);
2046 
2047    deliver_signal(tid, &info, NULL);
2048 }
2049 
2050 // Synthesize a fault where the address is OK, but the page
2051 // permissions are bad.
2052 void VG_(synth_fault_perms)(ThreadId tid, Addr addr)
2053 {
2054    synth_fault_common(tid, addr, VKI_SEGV_ACCERR);
2055 }
2056 
2057 // Synthesize a fault where there's nothing mapped at the address.
2058 void VG_(synth_fault_mapping)(ThreadId tid, Addr addr)
2059 {
2060    synth_fault_common(tid, addr, VKI_SEGV_MAPERR);
2061 }
2062 
2063 // Synthesize a misc memory fault.
2064 void VG_(synth_fault)(ThreadId tid)
2065 {
2066    synth_fault_common(tid, 0, VKI_SEGV_MADE_UP_GPF);
2067 }
2068 
2069 // Synthesise a SIGILL.
2070 void VG_(synth_sigill)(ThreadId tid, Addr addr)
2071 {
2072    vki_siginfo_t info;
2073 
2074    vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2075 
2076    VG_(memset)(&info, 0, sizeof(info));
2077    info.si_signo = VKI_SIGILL;
2078    info.si_code  = VKI_ILL_ILLOPC; /* jrs: no idea what this should be */
2079    info.VKI_SIGINFO_si_addr = (void*)addr;
2080 
2081    if (VG_(gdbserver_report_signal) (&info, tid)) {
2082       resume_scheduler(tid);
2083       deliver_signal(tid, &info, NULL);
2084    }
2085    else
2086       resume_scheduler(tid);
2087 }
2088 
2089 // Synthesise a SIGBUS.
2090 void VG_(synth_sigbus)(ThreadId tid)
2091 {
2092    vki_siginfo_t info;
2093 
2094    vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2095 
2096    VG_(memset)(&info, 0, sizeof(info));
2097    info.si_signo = VKI_SIGBUS;
2098    /* There are several meanings to SIGBUS (as per POSIX, presumably),
2099       but the most widely understood is "invalid address alignment",
2100       so let's use that. */
2101    info.si_code  = VKI_BUS_ADRALN;
2102    /* If we knew the invalid address in question, we could put it
2103       in .si_addr.  Oh well. */
2104    /* info.VKI_SIGINFO_si_addr = (void*)addr; */
2105 
2106    if (VG_(gdbserver_report_signal) (&info, tid)) {
2107       resume_scheduler(tid);
2108       deliver_signal(tid, &info, NULL);
2109    }
2110    else
2111       resume_scheduler(tid);
2112 }
2113 
2114 // Synthesise a SIGTRAP.
2115 void VG_(synth_sigtrap)(ThreadId tid)
2116 {
2117    vki_siginfo_t info;
2118    struct vki_ucontext uc;
2119 #  if defined(VGP_x86_darwin)
2120    struct __darwin_mcontext32 mc;
2121 #  elif defined(VGP_amd64_darwin)
2122    struct __darwin_mcontext64 mc;
2123 #  endif
2124 
2125    vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2126 
2127    VG_(memset)(&info, 0, sizeof(info));
2128    VG_(memset)(&uc,   0, sizeof(uc));
2129    info.si_signo = VKI_SIGTRAP;
2130    info.si_code = VKI_TRAP_BRKPT; /* tjh: only ever called for a brkpt ins */
2131 
2132 #  if defined(VGP_x86_linux) || defined(VGP_amd64_linux)
2133    uc.uc_mcontext.trapno = 3;     /* tjh: this is the x86 trap number
2134                                           for a breakpoint trap... */
2135    uc.uc_mcontext.err = 0;        /* tjh: no error code for x86
2136                                           breakpoint trap... */
2137 #  elif defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
2138    /* the same thing, but using Darwin field/struct names */
2139    VG_(memset)(&mc, 0, sizeof(mc));
2140    uc.uc_mcontext = &mc;
2141    uc.uc_mcontext->__es.__trapno = 3;
2142    uc.uc_mcontext->__es.__err = 0;
2143 #  elif defined(VGP_x86_solaris)
2144    uc.uc_mcontext.gregs[VKI_ERR] = 0;
2145    uc.uc_mcontext.gregs[VKI_TRAPNO] = VKI_T_BPTFLT;
2146 #  endif
2147 
2148    /* fixs390: do we need to do anything here for s390 ? */
2149    if (VG_(gdbserver_report_signal) (&info, tid)) {
2150       resume_scheduler(tid);
2151       deliver_signal(tid, &info, &uc);
2152    }
2153    else
2154       resume_scheduler(tid);
2155 }
2156 
2157 // Synthesise a SIGFPE.
2158 void VG_(synth_sigfpe)(ThreadId tid, UInt code)
2159 {
2160 // Only tested on mips32 and mips64
2161 #if !defined(VGA_mips32) && !defined(VGA_mips64)
2162    vg_assert(0);
2163 #else
2164    vki_siginfo_t info;
2165    struct vki_ucontext uc;
2166 
2167    vg_assert(VG_(threads)[tid].status == VgTs_Runnable);
2168 
2169    VG_(memset)(&info, 0, sizeof(info));
2170    VG_(memset)(&uc,   0, sizeof(uc));
2171    info.si_signo = VKI_SIGFPE;
2172    info.si_code = code;
2173 
2174    if (VG_(gdbserver_report_signal) (&info, tid)) {
2175       resume_scheduler(tid);
2176       deliver_signal(tid, &info, &uc);
2177    }
2178    else
2179       resume_scheduler(tid);
2180 #endif
2181 }
2182 
2183 /* Make a signal pending for a thread, for later delivery.
2184    VG_(poll_signals) will arrange for it to be delivered at the right
2185    time.
2186 
2187    tid==0 means add it to the process-wide queue, and not send it to a
2188    specific thread.
2189 */
2190 static
2191 void queue_signal(ThreadId tid, const vki_siginfo_t *si)
2192 {
2193    ThreadState *tst;
2194    SigQueue *sq;
2195    vki_sigset_t savedmask;
2196 
2197    tst = VG_(get_ThreadState)(tid);
2198 
2199    /* Protect the signal queue against async deliveries */
2200    block_all_host_signals(&savedmask);
2201 
2202    if (tst->sig_queue == NULL) {
2203       tst->sig_queue = VG_(malloc)("signals.qs.1", sizeof(*tst->sig_queue));
2204       VG_(memset)(tst->sig_queue, 0, sizeof(*tst->sig_queue));
2205    }
2206    sq = tst->sig_queue;
2207 
2208    if (VG_(clo_trace_signals))
2209       VG_(dmsg)("Queueing signal %d (idx %d) to thread %u\n",
2210                 si->si_signo, sq->next, tid);
2211 
2212    /* Add signal to the queue.  If the queue gets overrun, then old
2213       queued signals may get lost.
2214 
2215       XXX We should also keep a sigset of pending signals, so that at
2216       least a non-siginfo signal gets delivered.
2217    */
2218    if (sq->sigs[sq->next].si_signo != 0)
2219       VG_(umsg)("Signal %d being dropped from thread %u's queue\n",
2220                 sq->sigs[sq->next].si_signo, tid);
2221 
2222    sq->sigs[sq->next] = *si;
2223    sq->next = (sq->next+1) % N_QUEUED_SIGNALS;
2224 
2225    restore_all_host_signals(&savedmask);
2226 }
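
/* Worked example (editorial): the queue is a small ring of
   N_QUEUED_SIGNALS slots.  Once it is full, the next queue_signal()
   overwrites the slot at sq->next -- the oldest entry -- after printing
   the "being dropped" warning above; next_queued() below then scans
   from sq->next, so delivery is approximately FIFO. */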
2227 
2228 /*
2229    Returns the next queued signal for thread tid which is in "set".
2230    tid==0 means process-wide signal.  Set si_signo to 0 when the
2231    signal has been delivered.
2232 
2233    Must be called with all signals blocked, to protect against async
2234    deliveries.
2235 */
2236 static vki_siginfo_t *next_queued(ThreadId tid, const vki_sigset_t *set)
2237 {
2238    ThreadState *tst = VG_(get_ThreadState)(tid);
2239    SigQueue *sq;
2240    Int idx;
2241    vki_siginfo_t *ret = NULL;
2242 
2243    sq = tst->sig_queue;
2244    if (sq == NULL)
2245       goto out;
2246 
2247    idx = sq->next;
2248    do {
2249       if (0)
2250 	 VG_(printf)("idx=%d si_signo=%d inset=%d\n", idx,
2251 		     sq->sigs[idx].si_signo,
2252                      VG_(sigismember)(set, sq->sigs[idx].si_signo));
2253 
2254       if (sq->sigs[idx].si_signo != 0
2255           && VG_(sigismember)(set, sq->sigs[idx].si_signo)) {
2256 	 if (VG_(clo_trace_signals))
2257             VG_(dmsg)("Returning queued signal %d (idx %d) for thread %u\n",
2258                       sq->sigs[idx].si_signo, idx, tid);
2259 	 ret = &sq->sigs[idx];
2260 	 goto out;
2261       }
2262 
2263       idx = (idx + 1) % N_QUEUED_SIGNALS;
2264    } while(idx != sq->next);
2265   out:
2266    return ret;
2267 }
2268 
2269 static int sanitize_si_code(int si_code)
2270 {
2271 #if defined(VGO_linux)
2272    /* The Linux kernel uses the top 16 bits of si_code for its own
2273       use and only exports the bottom 16 bits to user space - at least
2274       that is the theory, but it turns out that there are some kernels
2275       around that forget to mask out the top 16 bits so we do it here.
2276 
2277       The kernel treats the bottom 16 bits as signed and (when it does
2278       mask them off) sign extends them when exporting to user space so
2279       we do the same thing here. */
2280    return (Short)si_code;
2281 #elif defined(VGO_darwin) || defined(VGO_solaris)
2282    return si_code;
2283 #else
2284 #  error Unknown OS
2285 #endif
2286 }
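
/* Worked example (editorial): suppose a buggy kernel reports si_code
   0x00010006 where only the low 16 bits are meaningful.  (Short)
   truncation yields 0x0006 = 6, a positive, kernel-generated code.  A
   user-space code whose low 16 bits are 0xfffa sign-extends back to -6
   (VKI_SI_TKILL on Linux), preserving the sign convention that
   is_signal_from_kernel() above relies on. */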
2287 
2288 #if defined(VGO_solaris)
2289 /* The following function is used to switch Valgrind from a client stack back onto
2290    a Valgrind stack.  It is used only when the door_return call was invoked by
2291    the client because this is the only syscall which is executed directly on
2292    the client stack (see syscall-{x86,amd64}-solaris.S).  The switch onto the
2293    Valgrind stack has to be made as soon as possible because there is no
2294    guarantee that there is enough space on the client stack to run the
2295    complete signal machinery.  Also, Valgrind has to be switched back onto its
2296    stack before a simulated signal frame is created because that will
2297    overwrite the real sigframe built by the kernel. */
2298 static void async_signalhandler_solaris_preprocess(ThreadId tid, Int *signo,
2299                                                    vki_siginfo_t *info,
2300                                                    struct vki_ucontext *uc)
2301 {
2302 #  define RECURSION_BIT 0x1000
2303    Addr sp;
2304    vki_sigframe_t *frame;
2305    ThreadState *tst = VG_(get_ThreadState)(tid);
2306    Int rec_signo;
2307 
2308    /* If not doing door_return then return instantly. */
2309    if (!tst->os_state.in_door_return)
2310       return;
2311 
2312    /* Check for the recursion:
2313       v ...
2314       | async_signalhandler - executed on the client stack
2315       v async_signalhandler_solaris_preprocess - first call switches the
2316       |   stacks and sets the RECURSION_BIT flag
2317       v async_signalhandler - executed on the Valgrind stack
2318       | async_signalhandler_solaris_preprocess - the RECURSION_BIT flag is
2319       v   set, clear it and return
2320     */
2321    if (*signo & RECURSION_BIT) {
2322       *signo &= ~RECURSION_BIT;
2323       return;
2324    }
2325 
2326    rec_signo = *signo | RECURSION_BIT;
2327 
2328 #  if defined(VGP_x86_solaris)
2329    /* Register %ebx/%rbx points to the top of the original V stack. */
2330    sp = uc->uc_mcontext.gregs[VKI_EBX];
2331 #  elif defined(VGP_amd64_solaris)
2332    sp = uc->uc_mcontext.gregs[VKI_REG_RBX];
2333 #  else
2334 #    error "Unknown platform"
2335 #  endif
2336 
2337    /* Build a fake signal frame, similarly as in sigframe-solaris.c. */
2338    /* Calculate a new stack pointer. */
2339    sp -= sizeof(vki_sigframe_t);
2340    sp = VG_ROUNDDN(sp, 16) - sizeof(UWord);
2341 
2342    /* Fill in the frame. */
2343    frame = (vki_sigframe_t*)sp;
2344    /* Set a bogus return address. */
2345    frame->return_addr = (void*)~0UL;
2346    frame->a1_signo = rec_signo;
2347    /* The first parameter has to be 16-byte aligned, resembling a function
2348       call. */
2349    {
2350       /* Using
2351          vg_assert(VG_IS_16_ALIGNED(&frame->a1_signo));
2352          seems to get miscompiled on amd64 with GCC 4.7.2. */
2353       Addr signo_addr = (Addr)&frame->a1_signo;
2354       vg_assert(VG_IS_16_ALIGNED(signo_addr));
2355    }
2356    frame->a2_siginfo = &frame->siginfo;
2357    frame->siginfo = *info;
2358    frame->ucontext = *uc;
2359 
2360 #  if defined(VGP_x86_solaris)
2361    frame->a3_ucontext = &frame->ucontext;
2362 
2363    /* Switch onto the V stack and restart the signal processing. */
2364    __asm__ __volatile__(
2365       "xorl %%ebp, %%ebp\n"
2366       "movl %[sp], %%esp\n"
2367       "jmp async_signalhandler\n"
2368       :
2369       : [sp] "a" (sp)
2370       : /*"ebp"*/);
2371 
2372 #  elif defined(VGP_amd64_solaris)
2373    __asm__ __volatile__(
2374       "xorq %%rbp, %%rbp\n"
2375       "movq %[sp], %%rsp\n"
2376       "jmp async_signalhandler\n"
2377       :
2378       : [sp] "a" (sp), "D" (rec_signo), "S" (&frame->siginfo),
2379         "d" (&frame->ucontext)
2380       : /*"rbp"*/);
2381 #  else
2382 #    error "Unknown platform"
2383 #  endif
2384 
2385    /* We should never get here. */
2386    vg_assert(0);
2387 
2388 #  undef RECURSION_BIT
2389 }
2390 #endif
2391 
2392 /*
2393    Receive an async signal from the kernel.
2394 
2395    This should only happen when the thread is blocked in a syscall,
2396    since that's the only time this set of signals is unblocked.
2397 */
2398 static
2399 void async_signalhandler ( Int sigNo,
2400                            vki_siginfo_t *info, struct vki_ucontext *uc )
2401 {
2402    ThreadId     tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
2403    ThreadState* tst = VG_(get_ThreadState)(tid);
2404    SysRes       sres;
2405 
2406    vg_assert(tst->status == VgTs_WaitSys);
2407 
2408 #  if defined(VGO_solaris)
2409    async_signalhandler_solaris_preprocess(tid, &sigNo, info, uc);
2410 #  endif
2411 
2412    /* The thread isn't currently running, make it so before going on */
2413    VG_(acquire_BigLock)(tid, "async_signalhandler");
2414 
2415    info->si_code = sanitize_si_code(info->si_code);
2416 
2417    if (VG_(clo_trace_signals))
2418       VG_(dmsg)("async signal handler: signal=%d, tid=%u, si_code=%d, "
2419                 "exitreason %s\n",
2420                 sigNo, tid, info->si_code,
2421                 VG_(name_of_VgSchedReturnCode)(tst->exitreason));
2422 
2423    /* */
2424    if (tst->exitreason == VgSrc_FatalSig)
2425       resume_scheduler(tid);
2426 
2427    /* Update thread state properly.  The signal can only have been
2428       delivered whilst we were in
2429       coregrind/m_syswrap/syscall-<PLAT>.S, and only then in the
2430       window between the two sigprocmask calls, since at all other
2431       times, we run with async signals on the host blocked.  Hence
2432       make enquiries on the basis that we were in or very close to a
2433       syscall, and attempt to fix up the guest state accordingly.
2434 
2435       (normal async signals occurring during computation are blocked,
2436       but periodically polled for using VG_(sigtimedwait_zero), and
2437       delivered at a point convenient for us.  Hence this routine only
2438       deals with signals that are delivered to a thread during a
2439       syscall.) */
2440 
2441    /* First, extract a SysRes from the ucontext_t* given to this
2442       handler.  If it is subsequently established by
2443       VG_(fixup_guest_state_after_syscall_interrupted) that the
2444       syscall was complete but the results had not been committed yet
2445       to the guest state, then it'll have to commit the results itself
2446       "by hand", and so we need to extract the SysRes.  Of course if
2447       the thread was not in that particular window then the
2448       SysRes will be meaningless, but that's OK too because
2449       VG_(fixup_guest_state_after_syscall_interrupted) will detect
2450       that the thread was not in said window and ignore the SysRes. */
2451 
2452    /* To make matters more complex still, on Darwin we need to know
2453       the "class" of the syscall under consideration in order to be
2454       able to extract a correct SysRes.  The class will have been
2455       saved just before the syscall, by VG_(client_syscall), into this
2456       thread's tst->arch.vex.guest_SC_CLASS.  Hence: */
2457 #  if defined(VGO_darwin)
2458    sres = VG_UCONTEXT_SYSCALL_SYSRES(uc, tst->arch.vex.guest_SC_CLASS);
2459 #  else
2460    sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
2461 #  endif
2462 
2463    /* (1) */
2464    VG_(fixup_guest_state_after_syscall_interrupted)(
2465       tid,
2466       VG_UCONTEXT_INSTR_PTR(uc),
2467       sres,
2468       !!(scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART),
2469       uc
2470    );
2471 
2472    /* (2) */
2473    /* Set up the thread's state to deliver a signal.
2474       However, if exitreason is VgSrc_FatalSig, then thread tid was
2475       taken out of a syscall by VG_(nuke_all_threads_except).
2476       But after the emission of VKI_SIGKILL, another (fatal) async
2477       signal might be sent. In such a case, we must not handle this
2478       signal, as the thread is supposed to die first.
2479       => resume the scheduler for such a thread, so that the scheduler
2480       can let the thread die. */
2481    if (tst->exitreason != VgSrc_FatalSig
2482        && !is_sig_ign(info, tid))
2483       deliver_signal(tid, info, uc);
2484 
2485    /* It's crucial that (1) and (2) happen in the order (1) then (2)
2486       and not the other way around.  (1) fixes up the guest thread
2487       state to reflect the fact that the syscall was interrupted --
2488       either to restart the syscall or to return EINTR.  (2) then sets
2489       up the thread state to deliver the signal.  Then we resume
2490       execution.  First, the signal handler is run, since that's the
2491       second adjustment we made to the thread state.  If that returns,
2492       then we resume at the guest state created by (1), viz, either
2493       the syscall returns EINTR or is restarted.
2494 
2495       If (2) was done before (1) the outcome would be completely
2496       different, and wrong. */
2497 
2498    /* longjmp back to the thread's main loop to start executing the
2499       handler. */
2500    resume_scheduler(tid);
2501 
2502    VG_(core_panic)("async_signalhandler: got unexpected signal "
2503                    "while outside of scheduler");
2504 }
2505 
2506 /* Extend the stack of thread #tid to cover addr. It is expected that
2507    addr either points into an already mapped anonymous segment or into a
2508    reservation segment abutting the stack segment. Everything else is a bug.
2509 
2510    Returns True on success, False on failure.
2511 
2512    Succeeds without doing anything if addr is already within a segment.
2513 
2514    Failure could be caused by:
2515    - addr not below a growable segment
2516    - new stack size would exceed the stack limit for the given thread
2517    - mmap failed for some other reason
2518 */
2519 Bool VG_(extend_stack)(ThreadId tid, Addr addr)
2520 {
2521    SizeT udelta;
2522    Addr new_stack_base;
2523 
2524    /* Get the segment containing addr. */
2525    const NSegment* seg = VG_(am_find_nsegment)(addr);
2526    vg_assert(seg != NULL);
2527 
2528    /* TODO: the test "seg->kind == SkAnonC" is really inadequate,
2529       because although it tests whether the segment is mapped
2530       _somehow_, it doesn't check that it has the right permissions
2531       (r,w, maybe x) ?  */
2532    if (seg->kind == SkAnonC)
2533       /* addr is already mapped.  Nothing to do. */
2534       return True;
2535 
2536    const NSegment* seg_next = VG_(am_next_nsegment)( seg, True/*fwds*/ );
2537    vg_assert(seg_next != NULL);
2538 
2539    udelta = VG_PGROUNDUP(seg_next->start - addr);
2540    new_stack_base = seg_next->start - udelta;
2541 
2542    VG_(debugLog)(1, "signals",
2543                  "extending a stack base 0x%lx down by %lu"
2544                  " new base 0x%lx to cover 0x%lx\n",
2545                  seg_next->start, udelta, new_stack_base, addr);
2546    Bool overflow;
2547    if (! VG_(am_extend_into_adjacent_reservation_client)
2548        ( seg_next->start, -(SSizeT)udelta, &overflow )) {
2549       if (overflow)
2550          VG_(umsg)("Stack overflow in thread #%u: can't grow stack to %#lx\n",
2551                    tid, new_stack_base);
2552       else
2553          VG_(umsg)("Cannot map memory to grow the stack for thread #%u "
2554                    "to %#lx\n", tid, new_stack_base);
2555       return False;
2556    }
2557 
2558    /* When we change the main stack, we have to let the stack handling
2559       code know about it. */
2560    VG_(change_stack)(VG_(clstk_id), new_stack_base, VG_(clstk_end));
2561 
2562    if (VG_(clo_sanity_level) > 2)
2563       VG_(sanity_check_general)(False);
2564 
2565    return True;
2566 }
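
/* Typical call pattern (editorial sketch, mirroring the fatal-signal
   path above and extend_stack_if_appropriate() below): round the
   thread's SP down to a page boundary, check that it lies in an
   extensible client stack, then extend. */
#if 0
   Addr esp  = VG_(get_SP)(tid);
   Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
   if (VG_(am_addr_is_in_extensible_client_stack)(base)
       && VG_(extend_stack)(tid, base)) {
      /* stack grown; the faulting access can now be retried */
   }
#endif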
2567 
2568 static fault_catcher_t fault_catcher = NULL;
2569 
2570 fault_catcher_t VG_(set_fault_catcher)(fault_catcher_t catcher)
2571 {
2572    fault_catcher_t prev_catcher = fault_catcher;
2573    fault_catcher = catcher;
2574    return prev_catcher;
2575 }
2576 
2577 static
2578 void sync_signalhandler_from_user ( ThreadId tid,
2579          Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
2580 {
2581    ThreadId qtid;
2582 
2583    /* If some user-process sent us a sync signal (ie. it's not the result
2584       of a faulting instruction), then how we treat it depends on when it
2585       arrives... */
2586 
2587    if (VG_(threads)[tid].status == VgTs_WaitSys
2588 #     if defined(VGO_solaris)
2589       /* Check if the signal was really received while doing a blocking
2590          syscall.  Only then the async_signalhandler() path can be used. */
2591        && VG_(is_ip_in_blocking_syscall)(tid, VG_UCONTEXT_INSTR_PTR(uc))
2592 #     endif
2593          ) {
2594       /* Signal arrived while we're blocked in a syscall.  This means that
2595          the client's signal mask was applied.  In other words, we can't
2596          get here unless the client wants this signal right now.  This means
2597          we can simply use the async_signalhandler. */
2598       if (VG_(clo_trace_signals))
2599          VG_(dmsg)("Delivering user-sent sync signal %d as async signal\n",
2600                    sigNo);
2601 
2602       async_signalhandler(sigNo, info, uc);
2603       VG_(core_panic)("async_signalhandler returned!?\n");
2604 
2605    } else {
2606       /* Signal arrived while in generated client code, or while running
2607          Valgrind core code.  That means that every thread has these signals
2608          unblocked, so we can't rely on the kernel to route them properly;
2609          we need to queue them manually. */
2610       if (VG_(clo_trace_signals))
2611          VG_(dmsg)("Routing user-sent sync signal %d via queue\n", sigNo);
2612 
2613 #     if defined(VGO_linux)
2614       /* On Linux, first we have to do a sanity check of the siginfo. */
2615       if (info->VKI_SIGINFO_si_pid == 0) {
2616          /* There's a per-user limit of pending siginfo signals.  If
2617             you exceed this, by having more than that number of
2618             pending signals with siginfo, then new signals are
2619             delivered without siginfo.  This condition can be caused
2620             by any unrelated program you're running at the same time
2621             as Valgrind, if it has a large number of pending siginfo
2622             signals which it isn't taking delivery of.
2623 
2624             Since we depend on siginfo to work out why we were sent a
2625             signal and what we should do about it, we really can't
2626             continue unless we get it. */
2627          VG_(umsg)("Signal %d (%s) appears to have lost its siginfo; "
2628                    "I can't go on.\n", sigNo, VG_(signame)(sigNo));
2629          VG_(printf)(
2630 "  This may be because one of your programs has consumed your ration of\n"
2631 "  siginfo structures.  For more information, see:\n"
2632 "    http://kerneltrap.org/mailarchive/1/message/25599/thread\n"
2633 "  Basically, some program on your system is building up a large queue of\n"
2634 "  pending signals, and this causes the siginfo data for other signals to\n"
2635 "  be dropped because it's exceeding a system limit.  However, Valgrind\n"
2636 "  absolutely needs siginfo for SIGSEGV.  A workaround is to track down the\n"
2637 "  offending program and avoid running it while using Valgrind, but there\n"
2638 "  is no easy way to do this.  Apparently the problem was fixed in kernel\n"
2639 "  2.6.12.\n");
2640 
2641          /* It's a fatal signal, so we force the default handler. */
2642          VG_(set_default_handler)(sigNo);
2643          deliver_signal(tid, info, uc);
2644          resume_scheduler(tid);
2645          VG_(exit)(99);       /* If we can't resume, then just exit */
2646       }
2647 #     endif
2648 
2649       qtid = 0;         /* shared pending by default */
2650 #     if defined(VGO_linux)
2651       if (info->si_code == VKI_SI_TKILL)
2652          qtid = tid;    /* directed to us specifically */
2653 #     endif
2654       queue_signal(qtid, info);
2655    }
2656 }
2657 
2658 /* Returns the reported fault address for an exact address */
2659 static Addr fault_mask(Addr in)
2660 {
2661    /*  We have to use VG_PGROUNDDN because faults on s390x only deliver
2662        the page address but not the address within a page.
2663     */
2664 #  if defined(VGA_s390x)
2665    return VG_PGROUNDDN(in);
2666 #  else
2667    return in;
2668 #endif
2669 }
2670 
2671 /* Returns True if the sync signal was due to the stack requiring extension
2672    and the extension was successful.
2673 */
2674 static Bool extend_stack_if_appropriate(ThreadId tid, vki_siginfo_t* info)
2675 {
2676    Addr fault;
2677    Addr esp;
2678    NSegment const *seg, *seg_next;
2679 
2680    if (info->si_signo != VKI_SIGSEGV)
2681       return False;
2682 
2683    fault    = (Addr)info->VKI_SIGINFO_si_addr;
2684    esp      = VG_(get_SP)(tid);
2685    seg      = VG_(am_find_nsegment)(fault);
2686    seg_next = seg ? VG_(am_next_nsegment)( seg, True/*fwds*/ )
2687                   : NULL;
2688 
2689    if (VG_(clo_trace_signals)) {
2690       if (seg == NULL)
2691          VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
2692                    "seg=NULL\n",
2693                    info->si_code, fault, tid, esp);
2694       else
2695          VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
2696                    "seg=%#lx-%#lx\n",
2697                    info->si_code, fault, tid, esp, seg->start, seg->end);
2698    }
2699 
2700    if (info->si_code == VKI_SEGV_MAPERR
2701        && seg
2702        && seg->kind == SkResvn
2703        && seg->smode == SmUpper
2704        && seg_next
2705        && seg_next->kind == SkAnonC
2706        && fault >= fault_mask(esp - VG_STACK_REDZONE_SZB)) {
2707       /* If the fault address is above esp but below the current known
2708          stack segment base, and it was a fault because there was
2709          nothing mapped there (as opposed to a permissions fault),
2710          then extend the stack segment.
2711        */
2712       Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
2713       if (VG_(am_addr_is_in_extensible_client_stack)(base)
2714           && VG_(extend_stack)(tid, base)) {
2715          if (VG_(clo_trace_signals))
2716             VG_(dmsg)("       -> extended stack base to %#lx\n",
2717                       VG_PGROUNDDN(fault));
2718          return True;
2719       } else {
2720          return False;
2721       }
2722    } else {
2723       return False;
2724    }
2725 }
2726 
2727 static
2728 void sync_signalhandler_from_kernel ( ThreadId tid,
2729          Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
2730 {
2731    /* Check to see if some part of Valgrind itself is interested in faults.
2732       The fault catcher should never be set whilst we're in generated code, so
2733       check for that.  AFAIK the only use of the catcher right now is
2734       memcheck's leak detector. */
2735    if (fault_catcher) {
2736       vg_assert(VG_(in_generated_code) == False);
2737 
2738       (*fault_catcher)(sigNo, (Addr)info->VKI_SIGINFO_si_addr);
2739       /* If the catcher returns, then it didn't handle the fault,
2740          so carry on panicking. */
2741    }
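
   /* For reference, a tool installs a catcher only around a risky
      memory scan, roughly like this (a sketch, not code from this
      file; VG_(set_fault_catcher) is the real installer, the handler
      below is hypothetical):

         static void scan_catcher ( Int sig, Addr addr )
         {
            // longjmp back out of the scan loop here
         }

         VG_(set_fault_catcher)(scan_catcher);
         // ... scan possibly-unmapped memory ...
         VG_(set_fault_catcher)(NULL);
   */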

   if (extend_stack_if_appropriate(tid, info)) {
      /* Stack extension occurred, so we don't need to do anything else; upon
         returning from this function, we'll restart the host (hence guest)
         instruction. */
   } else {
      /* OK, this is a signal we really have to deal with.  If it came
         from the client's code, then we can jump back into the scheduler
         and have it delivered.  Otherwise it's a Valgrind bug. */
      ThreadState *tst = VG_(get_ThreadState)(tid);

      if (VG_(sigismember)(&tst->sig_mask, sigNo)) {
         /* signal is blocked, but they're not allowed to block faults */
         VG_(set_default_handler)(sigNo);
      }

      if (VG_(in_generated_code)) {
         if (VG_(gdbserver_report_signal) (info, tid)
             || VG_(sigismember)(&tst->sig_mask, sigNo)) {
            /* Can't continue; must longjmp back to the scheduler and thus
               enter the sighandler immediately. */
            deliver_signal(tid, info, uc);
            resume_scheduler(tid);
         }
         else
            resume_scheduler(tid);
      }

      /* If resume_scheduler returns, or the fault is our own, we have
         no longjmp target set up, implying that we weren't running
         client code; the signal was therefore generated by Valgrind
         internally.
       */
      VG_(dmsg)("VALGRIND INTERNAL ERROR: Valgrind received "
                "a signal %d (%s) - exiting\n",
                sigNo, VG_(signame)(sigNo));

      VG_(dmsg)("si_code=%d;  Faulting address: %p;  sp: %#lx\n",
                info->si_code, info->VKI_SIGINFO_si_addr,
                VG_UCONTEXT_STACK_PTR(uc));

      if (0)
         VG_(kill_self)(sigNo);  /* generate a core dump */

      //if (tid == 0)            /* could happen after everyone has exited */
      //  tid = VG_(master_tid);
      vg_assert(tid != 0);

      UnwindStartRegs startRegs;
      VG_(memset)(&startRegs, 0, sizeof(startRegs));

      VG_UCONTEXT_TO_UnwindStartRegs(&startRegs, uc);
      VG_(core_panic_at)("Killed by fatal signal", &startRegs);
   }
}

/*
   Receive a sync signal from the host.
*/
static
void sync_signalhandler ( Int sigNo,
                          vki_siginfo_t *info, struct vki_ucontext *uc )
{
   ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
   Bool from_user;

   if (0)
      VG_(printf)("sync_sighandler(%d, %p, %p)\n", sigNo, info, uc);

   vg_assert(info != NULL);
   vg_assert(info->si_signo == sigNo);
   vg_assert(sigNo == VKI_SIGSEGV
             || sigNo == VKI_SIGBUS
             || sigNo == VKI_SIGFPE
             || sigNo == VKI_SIGILL
             || sigNo == VKI_SIGTRAP);

   info->si_code = sanitize_si_code(info->si_code);

   from_user = !is_signal_from_kernel(tid, sigNo, info->si_code);

   if (VG_(clo_trace_signals)) {
      VG_(dmsg)("sync signal handler: "
                "signal=%d, si_code=%d, EIP=%#lx, eip=%#lx, from %s\n",
                sigNo, info->si_code, VG_(get_IP)(tid),
                VG_UCONTEXT_INSTR_PTR(uc),
                ( from_user ? "user" : "kernel" ));
   }
   vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));

   /* // debug code:
   if (0) {
      VG_(printf)("info->si_signo  %d\n", info->si_signo);
      VG_(printf)("info->si_errno  %d\n", info->si_errno);
      VG_(printf)("info->si_code   %d\n", info->si_code);
      VG_(printf)("info->si_pid    %d\n", info->si_pid);
      VG_(printf)("info->si_uid    %d\n", info->si_uid);
      VG_(printf)("info->si_status %d\n", info->si_status);
      VG_(printf)("info->si_addr   %p\n", info->si_addr);
   }
   */

   /* Figure out if the signal was sent from outside the process, i.e.
      by the user with kill() rather than raised by the kernel for a
      faulting instruction.  We care because a user-sent SIGSEGV is not
      tied to the instruction the thread is executing, so we treat it
      more like an async signal than a sync signal -- that is, merely
      queue it for later delivery. */
   if (from_user) {
      sync_signalhandler_from_user(  tid, sigNo, info, uc);
   } else {
      sync_signalhandler_from_kernel(tid, sigNo, info, uc);
   }

#  if defined(VGO_solaris)
   /* On Solaris we have to return from signal handler manually. */
   VG_(do_syscall2)(__NR_context, VKI_SETCONTEXT, (UWord)uc);
#  endif
}


/*
   Kill this thread.  Makes it leave any syscall it might be currently
   blocked in, and return to the scheduler.  This doesn't mark the thread
   as exiting; that's the caller's job.
 */
static void sigvgkill_handler(int signo, vki_siginfo_t *si,
                                         struct vki_ucontext *uc)
{
   ThreadId     tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
   ThreadStatus at_signal = VG_(threads)[tid].status;

   if (VG_(clo_trace_signals))
      VG_(dmsg)("sigvgkill for lwp %d tid %u\n", VG_(gettid)(), tid);

   VG_(acquire_BigLock)(tid, "sigvgkill_handler");

   vg_assert(signo == VG_SIGVGKILL);
   vg_assert(si->si_signo == signo);

   /* jrs 2006 August 3: the following assertion seems incorrect to
      me, and fails on AIX.  sigvgkill could be sent to a thread which
      is runnable - see VG_(nuke_all_threads_except) in the scheduler.
      Hence comment these out ..

      vg_assert(VG_(threads)[tid].status == VgTs_WaitSys);
      VG_(post_syscall)(tid);

      and instead do:
   */
   if (at_signal == VgTs_WaitSys)
      VG_(post_syscall)(tid);
   /* jrs 2006 August 3 ends */

   resume_scheduler(tid);

   VG_(core_panic)("sigvgkill_handler couldn't return to the scheduler\n");
}
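
/* Sender side, for orientation (a sketch based on the scheme described
   at the top of this file, not code from here): to kill thread t, the
   initiating thread sets t's exitreason and then delivers VG_SIGVGKILL
   to t's kernel thread so it drops out of any blocking syscall:

      VG_(threads)[t].exitreason = VgSrc_ExitThread;   // illustrative
      // ... send VG_SIGVGKILL to t's lwp (see the scheduler) ...
*/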

static __attribute__((unused))
void pp_ksigaction ( vki_sigaction_toK_t* sa )
{
   Int i;
   VG_(printf)("pp_ksigaction: handler %p, flags 0x%x, restorer %p\n",
               sa->ksa_handler,
               (UInt)sa->sa_flags,
#              if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
                  !defined(VGO_solaris)
                  sa->sa_restorer
#              else
                  (void*)0
#              endif
              );
   VG_(printf)("pp_ksigaction: { ");
   for (i = 1; i <= VG_(max_signal); i++)
      if (VG_(sigismember)(&(sa->sa_mask),i))
         VG_(printf)("%d ", i);
   VG_(printf)("}\n");
}

/*
   Force signal handler to default
 */
void VG_(set_default_handler)(Int signo)
{
   vki_sigaction_toK_t sa;

   sa.ksa_handler = VKI_SIG_DFL;
   sa.sa_flags = 0;
#  if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
      !defined(VGO_solaris)
   sa.sa_restorer = 0;
#  endif
   VG_(sigemptyset)(&sa.sa_mask);

   VG_(do_sys_sigaction)(signo, &sa, NULL);
}
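
/* Roughly the moral equivalent of the libc-level sequence (a sketch
   for orientation only):

      struct sigaction sa;
      sa.sa_handler = SIG_DFL;
      sa.sa_flags   = 0;
      sigemptyset(&sa.sa_mask);
      sigaction(signo, &sa, NULL);

   except that it goes through VG_(do_sys_sigaction), so the change is
   recorded in SCSS and propagated to the kernel via SKSS rather than
   bypassing Valgrind's bookkeeping. */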

/*
   Poll for pending signals, and set the next one up for delivery.
 */
void VG_(poll_signals)(ThreadId tid)
{
   vki_siginfo_t si, *sip;
   vki_sigset_t pollset;
   ThreadState *tst = VG_(get_ThreadState)(tid);
   vki_sigset_t saved_mask;

   if (tst->exitreason == VgSrc_FatalSig) {
      /* This thread has been asked to die because the process received
         a fatal signal.  So we must not poll for new signals, as we are
         supposed to die asap.  If we were to poll and deliver another
         (possibly fatal) signal, we could deadlock: this thread would
         then believe it has to terminate the other threads and wait for
         them to die, while another thread is already doing exactly
         that. */
      if (VG_(clo_trace_signals))
         VG_(dmsg)("poll_signals: not polling as thread %u is exitreason %s\n",
                   tid, VG_(name_of_VgSchedReturnCode)(tst->exitreason));
      return;
   }

   /* look for all the signals this thread isn't blocking */
   /* pollset = ~tst->sig_mask */
   VG_(sigcomplementset)( &pollset, &tst->sig_mask );

   block_all_host_signals(&saved_mask); // protect signal queue

   /* First look for any queued pending signals */
   sip = next_queued(tid, &pollset); /* this thread */

   if (sip == NULL)
      sip = next_queued(0, &pollset); /* process-wide */

   /* If there was nothing queued, ask the kernel for a pending signal */
   if (sip == NULL && VG_(sigtimedwait_zero)(&pollset, &si) > 0) {
      if (VG_(clo_trace_signals))
         VG_(dmsg)("poll_signals: got signal %d for thread %u exitreason %s\n",
                   si.si_signo, tid,
                   VG_(name_of_VgSchedReturnCode)(tst->exitreason));
      sip = &si;
   }

   if (sip != NULL) {
      /* OK, something to do; deliver it */
      if (VG_(clo_trace_signals))
         VG_(dmsg)("Polling found signal %d for tid %u exitreason %s\n",
                   sip->si_signo, tid,
                   VG_(name_of_VgSchedReturnCode)(tst->exitreason));
      if (!is_sig_ign(sip, tid))
         deliver_signal(tid, sip, NULL);
      else if (VG_(clo_trace_signals))
         VG_(dmsg)("   signal %d ignored\n", sip->si_signo);

      sip->si_signo = 0;   /* remove from signal queue, if that's
                              where it came from */
   }

   restore_all_host_signals(&saved_mask);
}
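
/* Delivery preference, for orientation: signals queued specifically
   for this thread win over process-wide queued signals, which in turn
   win over whatever the kernel has pending.  The scheduler calls this
   at safe points, per the scheme described at the top of this file,
   e.g. (illustrative only):

      VG_(poll_signals)(tid);   // just before running a time quantum
      // ... run the thread ...
*/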

/* At startup, copy the process' real signal state to the SCSS.
   Whilst doing this, block all real signals.  Then calculate SKSS and
   set the kernel to that.  Also initialise DCSS.
*/
void VG_(sigstartup_actions) ( void )
{
   Int i, ret, vKI_SIGRTMIN;
   vki_sigset_t saved_procmask;
   vki_sigaction_fromK_t sa;

   VG_(memset)(&scss, 0, sizeof(scss));
   VG_(memset)(&skss, 0, sizeof(skss));

#  if defined(VKI_SIGRTMIN)
   vKI_SIGRTMIN = VKI_SIGRTMIN;
#  else
   vKI_SIGRTMIN = 0; /* eg Darwin */
#  endif

   /* VG_(printf)("SIGSTARTUP\n"); */
   /* Block all signals.  saved_procmask remembers the previous mask,
      which the first thread inherits.
   */
   block_all_host_signals( &saved_procmask );

   /* Copy per-signal settings to SCSS. */
   for (i = 1; i <= _VKI_NSIG; i++) {
      /* Get the old host action */
      ret = VG_(sigaction)(i, NULL, &sa);

#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      /* apparently we may not even ask about the disposition of these
         signals, let alone change them */
      if (ret != 0 && (i == VKI_SIGKILL || i == VKI_SIGSTOP))
         continue;
#     endif

      if (ret != 0)
         break;

      /* Try setting it back to see if this signal is really
         available */
      if (vKI_SIGRTMIN > 0 /* it actually exists on this platform */
          && i >= vKI_SIGRTMIN) {
         vki_sigaction_toK_t tsa, sa2;

         tsa.ksa_handler = (void *)sync_signalhandler;
         tsa.sa_flags = VKI_SA_SIGINFO;
#        if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
            !defined(VGO_solaris)
         tsa.sa_restorer = 0;
#        endif
         VG_(sigfillset)(&tsa.sa_mask);

         /* try setting it to some arbitrary handler */
         if (VG_(sigaction)(i, &tsa, NULL) != 0) {
            /* failed - not really usable */
            break;
         }

         VG_(convert_sigaction_fromK_to_toK)( &sa, &sa2 );
         ret = VG_(sigaction)(i, &sa2, NULL);
         vg_assert(ret == 0);
      }
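
      /* Note (explanatory, not from the original): the probe above
         asks the kernel directly, not libc, so it discovers how many
         real-time signals this kernel actually supports.  The first
         signal number for which sigaction() fails terminates the
         loop, and the last accepted number becomes VG_(max_signal)
         below. */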

      VG_(max_signal) = i;

      if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
         VG_(printf)("snaffling handler 0x%lx for signal %d\n",
                     (Addr)(sa.ksa_handler), i );

      scss.scss_per_sig[i].scss_handler  = sa.ksa_handler;
      scss.scss_per_sig[i].scss_flags    = sa.sa_flags;
      scss.scss_per_sig[i].scss_mask     = sa.sa_mask;

      scss.scss_per_sig[i].scss_restorer = NULL;
#     if !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGO_solaris)
      scss.scss_per_sig[i].scss_restorer = sa.sa_restorer;
#     endif

      scss.scss_per_sig[i].scss_sa_tramp = NULL;
#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      scss.scss_per_sig[i].scss_sa_tramp = NULL;
      /*sa.sa_tramp;*/
      /* We can't know what it was, because Darwin's sys_sigaction
         doesn't tell us. */
#     endif
   }

   if (VG_(clo_trace_signals))
      VG_(dmsg)("Max kernel-supported signal is %d, VG_SIGVGKILL is %d\n",
                VG_(max_signal), VG_SIGVGKILL);

   /* Our private internal signals are treated as ignored */
   scss.scss_per_sig[VG_SIGVGKILL].scss_handler = VKI_SIG_IGN;
   scss.scss_per_sig[VG_SIGVGKILL].scss_flags   = VKI_SA_SIGINFO;
   VG_(sigfillset)(&scss.scss_per_sig[VG_SIGVGKILL].scss_mask);

   /* Copy the process' signal mask into the root thread. */
   vg_assert(VG_(threads)[1].status == VgTs_Init);
   for (i = 2; i < VG_N_THREADS; i++)
      vg_assert(VG_(threads)[i].status == VgTs_Empty);

   VG_(threads)[1].sig_mask = saved_procmask;
   VG_(threads)[1].tmp_sig_mask = saved_procmask;

   /* Calculate SKSS and apply it.  This also sets the initial kernel
      mask we need to run with. */
   handle_SCSS_change( True /* forced update */ );

   /* Leave with all signals still blocked; the thread scheduler loop
      will set the appropriate mask at the appropriate time. */
}

/*--------------------------------------------------------------------*/
/*--- end                                                          ---*/
/*--------------------------------------------------------------------*/