
Lines Matching refs:your

14 We actively welcome your pull requests.
16 1. Fork the repo and create your branch from `dev`.
20 5. Make sure your code lints.
24 In order to accept your pull request, we need you to submit a CLA. You only need
27 Complete your CLA here: <https://code.facebook.com/cla>
37 * Checkout your fork of zstd if you have not already
42 * Update your local dev branch
48 * Make a new branch on your fork about the topic you're developing for
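The checkout-and-branch steps above can be sketched as follows. The demo below uses a throwaway local repository instead of a real fork, so the repository and branch names are illustrative; in a real workflow you would fork facebook/zstd, clone your fork, and pull upstream's dev branch first:

```shell
# Local-only sketch of the branching structure; no network access needed.
# In practice: git clone <your-fork-url>; git pull upstream dev; then branch.
git init -q demo-zstd
cd demo-zstd
git checkout -q -b dev           # zstd development happens on the dev branch
git checkout -q -b my-feature    # topic branch named for what you're developing
git branch --show-current        # confirm which branch you are on
```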
60 * Note: run local tests to ensure that your changes didn't break existing functionality
71 … * Before sharing anything to the community, make sure that all CI tests pass on your local fork.
72 See our section on setting up your CI environment for more information on how to do this.
73 … * Ensure that static analysis passes on your development machine. See the Static Analysis section
76 … * When you are ready to share your changes to the community, create a pull request from your branch
77 … to facebook:dev. You can do this very easily by clicking 'Create Pull Request' on your fork's home
79 … * From there, select the branch where you made changes as your source branch and facebook:dev
91 …* Note: if you have been working with a specific user and would like them to review your work, mak…
95 … * You will have to iterate on your changes with feedback from other collaborators to reach a point
96 where your pull request can be safely merged.
99 …* Eventually, someone from the zstd team will approve your pull request and not long after merge i…
102 … * Most PRs are linked with one or more Github issues. If this is the case for your PR, make sure
103 the corresponding issue is mentioned. If your change 'fixes' or completely addresses the
105 …* Just because your changes have been merged does not mean the topic or larger issue is complete. …
108 … their change makes it to the next release of zstd. Users will often discover bugs in your code or
109 … suggest ways to refine and improve your initial changes even after the pull request is merged.
114 static analysis. You can install it by following the instructions for your OS on https://clang-anal…
116 Once installed, you can ensure that our static analysis tests pass on your local development machine
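As a sketch, running the analyzer over a make-based build typically looks like the following; the exact targets and report path are illustrative:

```shell
# scan-build wraps the build, intercepts compiler invocations, and reports
# issues found by clang's static analyzer along the way.
make clean
scan-build make
# scan-build prints the directory holding its HTML report, e.g.:
# scan-view /tmp/scan-build-<timestamp>
```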
152 very well documented via past Github issues and pull requests. It may be the case that your
154 time to search through old issues and pull requests using keywords specific to your
162 benchmarks on your end before submitting a PR. Of course, you will not be able to benchmark
163 your changes on every single processor and os out there (and neither will we), but do the best
178 benchmarking machine. A virtual machine, a machine with shared resources, or your laptop
179 will typically not be stable enough to obtain reliable benchmark results. If you can get your
184 noise. Here are some things you can do to make your benchmarks more stable:
186 1. The simplest thing you can do to drastically improve the stability of your benchmark is
191 * How you aggregate your samples is important. You might be tempted to use the mean of your
194 outliers whereas the median is. Better still, you could simply take the fastest speed your
195 benchmark achieved on each run since that is likely the fastest your process will be
196 capable of running your code. In our experience, this (aggregating by just taking the sample
198 * The more samples you have, the more stable your benchmarks should be. You can verify
199 your improved stability by looking at the size of your confidence intervals as you
200 increase your sample count. These should get smaller and smaller. Eventually hopefully
205 address it directly by simply not including the first `n` iterations of your benchmark in
206 your aggregations. You can determine `n` by simply looking at the results from each iteration
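A minimal sketch of these aggregation strategies; the throughput samples below are made up:

```shell
# Five hypothetical throughput samples (MB/s) from repeated benchmark runs.
samples="302.6 301.9 310.2 302.8 303.1"
# Median: robust to outliers, unlike the mean.
median=$(printf '%s\n' $samples | sort -n | awk '{a[NR]=$1} END {print a[int((NR+1)/2)]}')
# Fastest run: the best throughput the machine demonstrated for this code.
fastest=$(printf '%s\n' $samples | sort -n | tail -1)
echo "median=$median fastest=$fastest"
```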
208 2. You cannot really get reliable benchmarks if your host machine is simultaneously running
209 another cpu/memory-intensive application in the background. If you are running benchmarks on your
210 personal laptop for instance, you should close all applications (including your code editor and
211 browser) before running your benchmarks. You might also have invisible background applications
214 * If you have multiple cores, you can even run your benchmark on a reserved core to prevent
216 on your OS:
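On linux, for example, core pinning can be done with `taskset`; the benchmark path below is a placeholder:

```shell
# Pin the benchmark process to core 0 so the scheduler cannot migrate it.
# ./your-benchmark stands in for your actual benchmarking script.
taskset -c 0 ./your-benchmark
# Optionally raise priority as well (requires appropriate permissions):
# nice -n -20 taskset -c 0 ./your-benchmark
```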
221 Dynamically linking your library will introduce some added variation (not a large amount but
233 The fastest signal you can get regarding your performance changes is via the built-in zstd cli
234 bench option. You can run Zstd as you typically would for your scenario using some set of options
242 specify a running time for your benchmark in seconds (default is 3 seconds).
243 Usually, the longer the running time, the more stable your results will be.
246 $ git checkout <commit-before-your-change>
248 $ git checkout <commit-after-your-change>
250 $ zstd-old -i5 -b1 <your-test-data>
251 1<your-test-data> : 8990 -> 3992 (2.252), 302.6 MB/s , 626.4 MB/s
252 $ zstd-new -i5 -b1 <your-test-data>
253 1<your-test-data> : 8990 -> 3992 (2.252), 302.8 MB/s , 628.4 MB/s
256 Unless your performance win is large enough to be visible despite the intrinsic noise
257 on your computer, benchzstd alone will likely not be enough to validate the impact of your
265 profile your code using `instruments` on mac, `perf` on linux and `visual studio profiler`
275 Profilers will let you see how much time your code spends inside a particular function.
276 If your target code snippet is only part of a function, it might be worth trying to
281 functions for you. Your goal will be to find your function of interest in this call graph
287 whose performance can be improved upon. Follow these steps to profile your code using
292 3. Close all other applications except for your instruments window and your terminal
293 4. Run your benchmarking script from your terminal window
296 and you will have ample time to attach your profiler to this process:)
301 5. Once you run your benchmarking script, switch back over to instruments and attach your
304 * Selecting your process from the dropdown. In my case, it is just going to be labeled
307 6. Your profiler will now start collecting metrics from your benchmarking script. Once
309 recording), stop your profiler.
311 8. You should be able to see your call graph.
313 zstd and your benchmarking script using debug flags. On mac and linux, this just means
314 you will have to supply the `-g` flag along with your build script. You might also
316 9. Dig down the graph to find your function call and then inspect it by double clicking
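On linux, an equivalent workflow with `perf` might look like this; the binary name is a placeholder, and building with `-g` preserves symbols here too:

```shell
# Record a call graph while the benchmark runs, then inspect it.
perf record -g ./your-benchmark
perf report                  # interactive view; drill down to your function
# Annotate a single function at the instruction level:
# perf annotate <your-function>
```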
330 of the first things your team will run when assessing your PR.
333 counters you expect to be impacted by your change are in fact being so. For example,
334 if you expect the L1 cache misses to decrease with your change, you can look at the
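For example, with `perf` on linux you might compare cache behavior before and after your change; event names vary by CPU, so check `perf list` for what your machine supports:

```shell
# Count L1 data cache loads and misses over a full benchmark run.
perf stat -e L1-dcache-loads,L1-dcache-load-misses ./your-benchmark
```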
343 ## Setting up continuous integration (CI) on your fork
349 The easiest way to run these CI tests on your own before submitting a PR to our dev branch
350 is to configure your personal fork of zstd with each of the CI platforms. Below, you'll find
354 Follow these steps to link travis-ci with your github fork of zstd
356 1. Make sure you are logged into your github account
361 6. Select 'Only select repositories' and select your fork of zstd from the drop down
363 …hub' again. This time, it will be for travis-pro (which will let you view your tests on the web da…
365 10. You should have travis set up on your fork now.
371 Follow these steps to link circle-ci with your github fork of zstd
373 1. Make sure you are logged into your github account
378 …select which repositories you want to give appveyor permission to. Select your fork of zstd if you…
379 7. You should have appveyor set up on your fork now.
394 We use GitHub issues to track public bugs. Please ensure your description is
405 By contributing to Zstandard, you agree that your contributions will be licensed