One way is to provide a uniform or compile-time constant to the shader so that it knows how big to make the blur. You can pass in a BLUR_SAMPLES constant as a preprocessor define and use it as the loop bound for the blur kernel. In my code I compile a few variants with fixed kernel sizes (9, 17, 33 px) and pick the smallest of those whose kernel size is ≥ the requested BLUR_SAMPLES.
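For concreteness, here is a minimal sketch of what a define-driven kernel might look like in GLSL. The uniform and varying names are placeholders rather than anything from my actual code, and it uses flat box weights instead of Gaussian weights to keep it short:

// Compiled once per variant, e.g. with "#define BLUR_SAMPLES 9" prepended
uniform sampler2D uSource;
uniform vec2 uTexelSize;   // 1.0 / texture resolution
varying vec2 vUV;

void main() {
    vec4 sum = vec4( 0.0 );
    // BLUR_SAMPLES taps centered on the current pixel (horizontal pass shown;
    // the vertical pass is the same with the offset applied to y)
    for ( int i = 0; i < BLUR_SAMPLES; i++ ) {
        float offset = float(i) - 0.5 * float(BLUR_SAMPLES - 1);
        sum += texture2D( uSource, vUV + vec2( offset * uTexelSize.x, 0.0 ) );
    }
    gl_FragColor = sum / float( BLUR_SAMPLES );
}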
Beyond that, you can implement a blur of any size with a fixed-size kernel by doing multiple passes. Two Gaussian blurs applied in series with standard deviations sigma1 and sigma2 are equivalent to a single Gaussian blur with sigma12 = sqrt( sigma1*sigma1 + sigma2*sigma2 ) (for example, blurs of sigma 3 and sigma 4 in series act like a single blur of sigma 5). I determine the number of passes like this:
int passCount = max( (int)ceil( square(radius) / square(baseKernelSize) ), 1 );   // square(x) = x*x
On each pass the kernel radius is:
float totalRadius = 0.0f;
for ( int i = 0; i < passCount; i++ ) {
    // Largest radius we can still apply this pass without overshooting the target
    float passRadius = min( baseKernelSize, sqrt( max( square(radius) - square(totalRadius), 0.0f ) ) );

    // do blur using previous pass result as input

    // Pass radii accumulate in quadrature, just like the sigmas above
    totalRadius = sqrt( square(totalRadius) + square(passRadius) );
}
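To make that concrete with made-up numbers: radius = 50 and baseKernelSize = 33 give passCount = ceil( 2500 / 1089 ) = 3, and the loop yields pass radii of 33, 33, and about 17.9, since sqrt( 33*33 + 33*33 + 17.9*17.9 ) ≈ 50.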
On the last pass the remaining radius is generally smaller than baseKernelSize, so you need to shrink the kernel (using a uniform scale factor in the shader) to hit the target size exactly.
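In terms of the sketch above, that scale factor could be a per-pass uniform (uOffsetScale is a made-up name, not from my actual code) set to passRadius / baseKernelSize on the CPU, with the tap loop modified to scale the spacing between samples:

uniform float uOffsetScale;   // passRadius / baseKernelSize, set per pass

    // Inside the tap loop: scaling the tap spacing shrinks the effective
    // kernel width proportionally
    float offset = ( float(i) - 0.5 * float(BLUR_SAMPLES - 1) ) * uOffsetScale;
    sum += texture2D( uSource, vUV + vec2( offset * uTexelSize.x, 0.0 ) );

On every pass before the last, passRadius equals baseKernelSize, so uOffsetScale is just 1.0.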