
Searched refs:w_1 (Results 1 – 3 of 3) sorted by relevance

/external/libaom/libaom/av1/common/x86/
selfguided_sse4.c (in apply_selfguided_restoration_sse4_1()):
  643: const __m128i w_1 = _mm_srai_epi32(_mm_add_epi32(v_1, rounding),
  648: const __m128i tmp = _mm_packus_epi32(w_0, w_1);
  654: const __m128i tmp = _mm_packs_epi32(w_0, w_1);
selfguided_avx2.c (in apply_selfguided_restoration_avx2()):
  698: const __m256i w_1 = _mm256_srai_epi32(
  705: const __m256i tmp = _mm256_packus_epi32(w_0, w_1);
  714: const __m256i tmp = _mm256_packs_epi32(w_0, w_1);
/external/tensorflow/tensorflow/tools/docker/notebooks/
2_getting_started.ipynb (notebook text):
  35: …The function it tries to compute is the best $w_1$ and $w_2$ it can find for the function $y = w_2…
  492: …uivalent to the function $\hat{y} = w_2 x + w_1$. What we're trying to do is find the "best" we…
  506: "What gradient descent does is start with random weights for $\hat{y} = w_2 x + w_1$ and graduall…
  812: …$w_1$ and $w_2$, the same thing as what `gradients(loss, weights)` does in the earlier code. Not su…