API Reference

This is the API reference for the SmartTraffic-RL project.

class smart_traffic_env.env.UrbanTrafficEnv(num_intersections: int = 4, lanes_per_intersection: int = 2, base_green: float = 30.0, delta_max: float = 5.0, control_interval: float = 60.0, episode_length: int = 60, demand_profile: ndarray | None = None, seed: int | None = None)[source]

Gym-style environment for urban traffic signal control.

close()[source]

Cleans up any resources held by the environment.

render(mode: str = 'human')[source]

Prints a one-line summary of the current state.

reset(seed: int | None = None, options: dict | None = None) Tuple[ndarray, dict][source]

Resets the environment to an initial internal state, returning an initial observation and info.

This method generates a new starting state, often with some randomness, to ensure that the agent explores the state space and learns a generalised policy for the environment. This randomness can be controlled with the seed parameter; otherwise, if the environment already has a random number generator and reset() is called with seed=None, the RNG is not reset.

Therefore, reset() should (in the typical use case) be called with a seed right after initialization and then never again.

For custom environments, the first line of reset() should be super().reset(seed=seed), which implements the seeding correctly.

Changed in version 0.25: The return_info parameter was removed and info is now expected to be returned.

Parameters:
  • seed (optional int) – The seed that is used to initialize the environment’s PRNG (np_random) and the read-only attribute np_random_seed. If the environment does not already have a PRNG and seed=None (the default) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset and the env’s np_random_seed will not be altered. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again.

  • options (optional dict) – Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns:
  • observation (ObsType) – Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

  • info (dict) – This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().

Return type:

Tuple[ndarray, dict]

step(action: ndarray) Tuple[ndarray, float, bool, bool, dict][source]

Executes one control step within the environment and returns the (observation, reward, terminated, truncated, info) tuple.
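A minimal usage sketch follows. It assumes the standard Gym-style action_space attribute and the constructor defaults documented above; adapt the values to your installation.

    from smart_traffic_env.env import UrbanTrafficEnv

    # Construct the environment and seed it once, right after initialization.
    env = UrbanTrafficEnv(num_intersections=4, lanes_per_intersection=2, seed=42)
    obs, info = env.reset(seed=42)

    terminated = truncated = False
    while not (terminated or truncated):
        # Random actions stand in for a trained policy here.
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        env.render()

    env.close()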

smart_traffic_env.flow_model.compute_reward(queues: ndarray) float[source]

Computes the reward for the current state. The reward is the negative average queue length, which is a common metric in traffic signal control.

Parameters:

queues – Current queue lengths for each lane [M,].

Returns:

Scalar reward value.
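As documented above, the reward is the negative average queue length. A minimal sketch of that behaviour follows; the actual implementation may differ in details such as scaling or clipping.

    import numpy as np

    def compute_reward_sketch(queues: np.ndarray) -> float:
        # Negative mean queue length: fewer queued vehicles -> less negative reward.
        return -float(np.mean(queues))

    # Three lanes with queues of 4, 6, and 8 vehicles give a reward of -6.0.
    print(compute_reward_sketch(np.array([4.0, 6.0, 8.0])))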

smart_traffic_env.flow_model.compute_service_rate(greens: ndarray, num_lanes: int, all_red_time: float, saturation_flow: float) ndarray[source]

Computes the service rate for each lane based on green times.

Parameters:
  • greens – Current green times for each intersection [N,].

  • num_lanes – Number of lanes per intersection.

  • all_red_time – All-red time in seconds.

  • saturation_flow – Saturation flow rate in veh/s.

Returns:

Service rate for each lane [M,].
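A hypothetical call with illustrative values (the numbers are not taken from the project defaults, and M = N * num_lanes is an assumption based on the shapes documented above):

    import numpy as np
    from smart_traffic_env.flow_model import compute_service_rate

    greens = np.full(4, 30.0)  # [N,] green times in seconds, one per intersection
    rates = compute_service_rate(
        greens,
        num_lanes=2,          # lanes per intersection
        all_red_time=5.0,     # seconds of all-red per cycle
        saturation_flow=0.5,  # veh/s discharged during green
    )
    print(rates.shape)        # expected (8,), i.e. one service rate per lane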

class smart_traffic_env.demand.DemandGenerator(num_steps: int, num_lanes: int, rng: RandomState, base_demand: float = 1000.0, period: int = 30, amplitude: float = 40.0, noise_std: float = 10.0)[source]

Generates a synthetic demand trajectory for the traffic network. The demand is modeled as a noisy sinusoid.

generate() ndarray[source]

Generates the demand trajectory.

Returns:

A numpy array of shape (num_steps, num_lanes) giving the vehicle arrival rate (vehicles per second) for each lane at each step.
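A usage sketch with illustrative parameter values:

    import numpy as np
    from smart_traffic_env.demand import DemandGenerator

    rng = np.random.RandomState(0)  # legacy NumPy RNG, matching the rng parameter type
    gen = DemandGenerator(num_steps=60, num_lanes=8, rng=rng)
    demand = gen.generate()
    print(demand.shape)             # (60, 8): arrival rate per lane at each step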

class smart_traffic_env.utils.MetricsLogger[source]

Logs time-series data for an episode, including queues, actions, and rewards.

end_episode()[source]

Aggregates and stores the logs for the completed episode.

log_step(queues, action, reward)[source]

Records the metrics for a single simulation step.

Parameters:
  • queues (np.ndarray) – The current queue lengths.

  • action (np.ndarray) – The action taken by the agent.

  • reward (float) – The reward received.

save(filepath: str)[source]

Saves the logged data to a JSON file.

Parameters:

filepath – The path to the output file.
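A usage sketch combining the environment and the logger. Treating the observation as the queue-length vector is an assumption here; substitute whatever your environment actually exposes.

    from smart_traffic_env.env import UrbanTrafficEnv
    from smart_traffic_env.utils import MetricsLogger

    env = UrbanTrafficEnv(seed=0)
    logger = MetricsLogger()

    obs, info = env.reset(seed=0)
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        # Assumption: the observation holds the current queue lengths.
        logger.log_step(obs, action, reward)

    logger.end_episode()
    logger.save("episode_metrics.json")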

class smart_traffic_env.wrappers.NormalizeObservation(env)[source]

Normalizes observations to the range [-1, 1].

observation(obs)[source]

Returns the observation rescaled to [-1, 1].

Parameters:

obs – The observation from the wrapped environment.

Returns:

The normalized observation.

class smart_traffic_env.wrappers.ScaleReward(env, scale_factor: float)[source]

Scales the reward by a constant factor.

reward(rew)[source]

Returns the scaled reward.

Parameters:

rew – The reward from the wrapped environment's step().

Returns:

The reward multiplied by scale_factor.
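A sketch of composing both wrappers around the base environment (the scale_factor value is illustrative):

    from smart_traffic_env.env import UrbanTrafficEnv
    from smart_traffic_env.wrappers import NormalizeObservation, ScaleReward

    env = UrbanTrafficEnv(seed=0)
    env = NormalizeObservation(env)            # observations rescaled to [-1, 1]
    env = ScaleReward(env, scale_factor=0.01)  # rewards multiplied by 0.01

    obs, info = env.reset(seed=0)
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())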