Consider the following Python code snippet:

```python
import numpy as np

const = 256.
x = [i for i in range(1, 100000)]
z = np.zeros(len(x))
for i, element in enumerate(x):
    z[i] = np.sqrt(const) * x[i]
```

What can you do to this code, without changing the computed results at any digit of precision, to make it more performant?

Prove your suggestion by benchmarking your revised code against the original code above.

How much performance improvement do you observe?
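One possible answer (a sketch, not the only valid approach) is to vectorize: build `x` directly as a NumPy array, hoist the loop-invariant `np.sqrt(const)` out of the loop, and let a single broadcast multiply replace the element-wise Python loop. Because each element still undergoes the same IEEE-754 double-precision multiply by the same scalar, the results are bit-for-bit identical. The function names and the `timeit` harness below are illustrative choices, not part of the original exercise:

```python
import timeit

import numpy as np

const = 256.


def original():
    # The original loop-based version (with its indentation fixed).
    x = [i for i in range(1, 100000)]
    z = np.zeros(len(x))
    for i, element in enumerate(x):
        z[i] = np.sqrt(const) * x[i]
    return z


def vectorized():
    # np.arange builds the float64 array directly; the scalar sqrt is
    # computed once and the multiply is broadcast over the whole array.
    x = np.arange(1, 100000, dtype=np.float64)
    return np.sqrt(const) * x


# Identical results: same per-element float64 operation in both versions.
assert np.array_equal(original(), vectorized())

t_orig = timeit.timeit(original, number=10)
t_vec = timeit.timeit(vectorized, number=10)
print(f"original:   {t_orig:.4f} s")
print(f"vectorized: {t_vec:.4f} s")
print(f"speedup:    {t_orig / t_vec:.1f}x")
```

The exact speedup depends on the machine and NumPy build, but eliminating the per-element Python interpreter overhead typically yields well over an order of magnitude.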