Core Issue
For `neps.Float` parameters with `log=True`, decoding a tensor at the upper bound (`1.0`) with the default dtype `torch.float64` back to a config via `neps.space.encoding.MinMaxNormalizer.decode` (which calls `neps.space.Domain.from_unit`) produces a value that slightly exceeds the parameter's upper bound: the `torch.exp(x)` round trip introduces an error in the last few decimal places. The error does not occur when `torch.float32` is used as the explicit dtype, so it is most likely a floating-point precision issue.
Code to reproduce
```python
import torch

import neps
from neps.space.encoding import ConfigEncoder
from neps.space.parsing import convert_to_space

sample_space = {
    "a": neps.Float(0.0001, 0.1, log=True),
    "b": neps.Float(0.1, 1.0, log=True),
    "c": neps.Float(0.003, 0.1, log=True),
    "d": neps.Integer(1, 10, log=True),
}

neps_space = convert_to_space(sample_space)
encoder = ConfigEncoder.from_parameters(neps_space.searchables)

# Unit-interval values at (or near) the upper bound of each parameter.
config_tensor = torch.tensor([1.0000, 0.9000, 0.9999, 1.0000], dtype=torch.float64)
decoded_config = encoder.decode_one(config_tensor)
print(decoded_config)
```
In the code above, the decoded value for parameter `a` is `0.10000000000000006`, which exceeds the upper bound of `0.1` by about `6e-17`.
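Until this is addressed upstream, one possible workaround (my own sketch, not a neps API) is to clamp decoded values back into the parameter's bounds, which absorbs the last-ulp round-trip error:

```python
import torch

def clamp_decoded(value: torch.Tensor, lo: float, hi: float) -> torch.Tensor:
    # Clamp a decoded value back into [lo, hi]; this absorbs the
    # tiny exp/log round-trip error seen at the bounds.
    return torch.clamp(value, min=lo, max=hi)

# The overshooting value reported above, nudged back to the bound.
overshoot = torch.tensor(0.10000000000000006, dtype=torch.float64)
print(clamp_decoded(overshoot, 0.0001, 0.1).item())  # 0.1
```

Clamping is safe here because any decoded value outside the bounds can only come from floating-point error, never from a legitimate sample.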