
ConfigEncoder decoding out of bounds for log-scale floating point parameter values, when sampled at the param upper bound #214

@Sohambasu07

Description


Core Issue

For neps.Float parameters with log=True, decoding a tensor value at the upper bound (1.0) back to a config with the default dtype torch.float64 goes through neps.space.encoding.MinMaxNormalizer.decode, which calls neps.space.Domain.from_unit. There, torch.exp(x) produces a value that overshoots the parameter's upper bound after a couple of decimal places.

The error does not occur when torch.float32 is passed as the explicit dtype, so this is most likely a floating-point precision issue.
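
A minimal sketch of the suspected arithmetic, assuming from_unit maps a unit value x in [0, 1] to exp(log(lower) + x * (log(upper) - log(lower))) (the formula and the names lower, upper, x are illustrative assumptions, not the library's exact code):

import torch

# Suspected decode path for a log-scaled Float(0.0001, 0.1):
# x in [0, 1] -> exp(log(lower) + x * (log(upper) - log(lower))).
lower, upper = 0.0001, 0.1
x = torch.tensor(1.0, dtype=torch.float64)

log_lower = torch.log(torch.tensor(lower, dtype=torch.float64))
log_upper = torch.log(torch.tensor(upper, dtype=torch.float64))
decoded = torch.exp(log_lower + x * (log_upper - log_lower))

# In float64 the round-trip through log/exp can land a few ulps above
# the bound; the encoder run below reports 0.10000000000000006 here.
print(decoded.item(), decoded.item() > upper)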

Code to reproduce

import torch

import neps
from neps.space.encoding import ConfigEncoder
from neps.space.parsing import convert_to_space

# Log-scaled parameters; "a" and "d" are sampled at their upper bound (1.0).
sample_space = {
    "a": neps.Float(0.0001, 0.1, log=True),
    "b": neps.Float(0.1, 1.0, log=True),
    "c": neps.Float(0.003, 0.1, log=True),
    "d": neps.Integer(1, 10, log=True),
}

neps_space = convert_to_space(sample_space)

encoder = ConfigEncoder.from_parameters(neps_space.searchables)

# Unit-normalized values, one per parameter, with the default float64 dtype.
config_tensor = torch.tensor([1.0000, 0.9000, 0.9999, 1.0000], dtype=torch.float64)
decoded_config = encoder.decode_one(config_tensor)
print(decoded_config)

In the above code, the decoded value for parameter a is 0.10000000000000006, which exceeds the upper bound of 0.1 by 6e-17.
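
One possible mitigation, sketched below, is to clamp decoded values back into the parameter's bounds after the exp. The helper name and its placement are hypothetical, not the library's API:

import torch

def clamp_to_bounds(values: torch.Tensor, lower: float, upper: float) -> torch.Tensor:
    # Hypothetical post-processing step: absorb the few ulps of
    # float64 round-off from exp(log(...)) by clamping to the domain.
    return torch.clamp(values, min=lower, max=upper)

# e.g. clamp_to_bounds(torch.tensor(0.10000000000000006), 0.0001, 0.1) -> 0.1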
