Searched refs:w_1 (Results 1 – 3 of 3) sorted by relevance
643 const __m128i w_1 = _mm_srai_epi32(_mm_add_epi32(v_1, rounding), in apply_selfguided_restoration_sse4_1() local
648 const __m128i tmp = _mm_packus_epi32(w_0, w_1); in apply_selfguided_restoration_sse4_1()
654 const __m128i tmp = _mm_packs_epi32(w_0, w_1); in apply_selfguided_restoration_sse4_1()
698 const __m256i w_1 = _mm256_srai_epi32( in apply_selfguided_restoration_avx2() local
705 const __m256i tmp = _mm256_packus_epi32(w_0, w_1); in apply_selfguided_restoration_avx2()
714 const __m256i tmp = _mm256_packs_epi32(w_0, w_1); in apply_selfguided_restoration_avx2()
35 …The function it tries to compute is the best $w_1$ and $w_2$ it can find for the function $y = w_2…
492 …uivalent to the function $\hat{y} = w_2 x + w_1$. What we're trying to do is find the "best" we…
506 …"What gradient descent does is start with random weights for $\hat{y} = w_2 x + w_1$ and graduall…
812 …w_1$ and $w_2$, the same thing as what `gradients(loss, weights)` does in the earlier code. Not su…