Fix CLA link in workflow (#964)

## Summary

- fix the CLA link posted by the bot
- docs suggest using an absolute URL: https://github.com/marketplace/actions/cla-assistant-lite
fix: artifacts from previous frames were bleeding through in TUI (#989)

Prior to this PR, I would frequently see glyphs from previous frames "bleed" through. I think this was due to two issues (now addressed in this PR):

* We were not making use of `ratatui::widgets::Clear` to clear out the buffer before drawing into it.
* To calculate the `width` used with `wrapped_line_count_for_cell()`, we were not accounting for the scrollbar. Now:
  * We calculate `effective_width` using `inner.width.saturating_sub(1)`, where the `1` is for the scrollbar.
  * We compute `text_area` using `effective_width` and pass the `text_area` to `paragraph.render()`.
  * We eliminate the conditional `needs_scrollbar` check and always call `render(Scrollbar)`.

I suspect this bug was introduced in #937, though I did not try to verify: I'm just happy that it appears to be fixed!
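For reference, here is a minimal sketch of the two fixes, assuming ratatui's `Clear`, `Paragraph`, and `Scrollbar` widgets; the function name and signature are illustrative, not the actual codex-rs code:

```rust
use ratatui::buffer::Buffer;
use ratatui::layout::Rect;
use ratatui::text::Line;
use ratatui::widgets::{
    Clear, Paragraph, Scrollbar, ScrollbarOrientation, ScrollbarState, StatefulWidget, Widget,
    Wrap,
};

// Hypothetical render helper, not the codex-rs implementation.
fn render_history(
    inner: Rect,
    lines: Vec<Line<'static>>,
    total_wrapped_lines: usize,
    scroll: u16,
    buf: &mut Buffer,
) {
    // Clear anything a previous frame left behind in this region.
    Clear.render(inner, buf);

    // Reserve one column for the scrollbar so the width used for wrapping
    // matches the width actually available to the text.
    let effective_width = inner.width.saturating_sub(1);
    let text_area = Rect { width: effective_width, ..inner };

    Paragraph::new(lines)
        .wrap(Wrap { trim: false })
        .scroll((scroll, 0))
        .render(text_area, buf);

    // Always render the scrollbar rather than gating it on a needs_scrollbar check.
    let mut scrollbar_state = ScrollbarState::new(total_wrapped_lines).position(scroll as usize);
    Scrollbar::new(ScrollbarOrientation::VerticalRight).render(inner, buf, &mut scrollbar_state);
}
```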
feat: record messages from user in ~/.codex/history.jsonl (#939)

This is a large change to support a "history" feature like you would expect in a shell such as Bash. History events are recorded in `$CODEX_HOME/history.jsonl`. Because it is a JSONL file, it is straightforward to append new entries (as opposed to the TypeScript CLI's `$CODEX_HOME/history.json`, which, to remain valid JSON, must be rewritten in full for each new entry).

Because it is possible for there to be multiple instances of Codex CLI writing to `history.jsonl` at once, we use advisory file locking when working with `history.jsonl` in `codex-rs/core/src/message_history.rs`.

Because we believe history is a sufficiently useful feature, we enable it by default. To provide some safety, we set the file permissions of `history.jsonl` to `0o600` so that other users on the system cannot read the user's history. We do not yet support a default list of `SENSITIVE_PATTERNS` as the TypeScript CLI does: https://github.com/openai/codex/blob/3fdf9df1335ac9501e3fb0e61715359145711e8b/codex-cli/src/utils/storage/command-history.ts#L10-L17

We are going to take a more conservative approach to this list in the Rust CLI. For example, while `/\b[A-Za-z0-9-_]{20,}\b/` might exclude sensitive information like API tokens, it would also exclude valuable information such as references to Git commits.

As noted in the updated documentation, users can opt out of history by adding the following to `config.toml`:

```toml
[history]
persistence = "none"
```

Because `history.jsonl` could, in theory, be quite large, we take an (arguably overly pedantic) approach to reading history entries into memory. Specifically, we start by telling the client the current number of entries in the history file (`history_entry_count`) as well as the inode (`history_log_id`) of `history.jsonl` (see the new fields on `SessionConfiguredEvent`). The client is responsible for keeping new entries in memory to build a "local history," but if the user hits up enough times to go "past" the end of local history, the client should use the new `GetHistoryEntryRequest` in the protocol to fetch older entries. Specifically, it should pass the `history_log_id` it was given originally and work backwards from `history_entry_count`. (It should really fetch history in batches rather than one at a time, but that is something we can improve upon in subsequent PRs.)

The motivation behind this scheme is that it is designed to defend against:

* `history.jsonl` being truncated during the session such that the index into the history is no longer consistent with what had been read up to that point. We do not yet have logic to enforce a `max_bytes` for `history.jsonl`, but once we do, we will aspire to implement it in a way that should result in a new inode for the file on most systems.
* New items from concurrent Codex CLI sessions appending to the history. Because, in the absence of truncation, `history.jsonl` is an append-only log, so long as the client reads backwards from `history_entry_count`, it should always get a consistent view of history. (That said, it will not be able to read _new_ commands from concurrent sessions, but perhaps we will introduce a `/` command to reload the latest history down the road.)

Admittedly, my testing of this feature thus far has been fairly light. I expect we will find bugs and introduce enhancements/fixes going forward.
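As an illustration of the locking and permissions described above, here is a minimal sketch of appending one history entry. It assumes the `fs2` crate for advisory locks and `serde_json` for the JSONL line; it is not the actual code in `codex-rs/core/src/message_history.rs`:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

use fs2::FileExt;

// Hypothetical helper for illustration only.
fn append_history_entry(codex_home: &Path, text: &str) -> std::io::Result<()> {
    let path = codex_home.join("history.jsonl");

    // Create the file with 0o600 so other users on the system cannot read it.
    let file = OpenOptions::new()
        .create(true)
        .append(true)
        .mode(0o600)
        .open(&path)?;

    // Advisory exclusive lock: other Codex CLI instances appending concurrently
    // wait here until this write completes, keeping the log consistent.
    file.lock_exclusive()?;
    let result = writeln!(&file, "{}", serde_json::json!({ "text": text }));
    file.unlock()?;
    result
}
```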
fix: properly wrap lines in the Rust TUI (#937)

As discussed on 699ec5a#commitcomment-156776835, to properly support scrolling long content in Ratatui for a sequence of cells, we need to:

* take the `Vec<Line>` for each cell
* using the wrapping logic we want to use at render time, compute the _effective line count_ using `Paragraph::line_count()` (see `wrapped_line_count_for_cell()` in this PR)
* sum up the effective line counts to compute the height of the area being scrolled
* given a `scroll_position: usize`, index into the list of "effective lines" and accumulate the appropriate `Vec<Line>` for the cells that should be displayed
* take that `Vec<Line>` to create a `Paragraph` and use the same line-wrapping policy that was used in `wrapped_line_count_for_cell()`
* display the resulting `Paragraph` and use the accounting to display a scrollbar with the appropriate thumb size and offset, without having to render the `Vec<Line>` for the full history

With this change, lines wrap as I expect and everything appears to redraw correctly as I resize my terminal!
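A minimal sketch of the effective-line-count accounting, assuming ratatui's `Paragraph::line_count()` (which may require the `unstable-rendered-line-info` feature, depending on the ratatui version); the names are illustrative, not the actual codex-rs API:

```rust
use ratatui::text::Line;
use ratatui::widgets::{Paragraph, Wrap};

/// Number of terminal rows a cell's lines occupy at `width`, using the same
/// wrapping policy that will be used at render time.
fn wrapped_line_count_for_cell(lines: &[Line<'static>], width: u16) -> usize {
    Paragraph::new(lines.to_vec())
        .wrap(Wrap { trim: false })
        .line_count(width)
}

/// Total scrollable height is the sum of the cells' effective line counts.
fn total_scroll_height(cells: &[Vec<Line<'static>>], width: u16) -> usize {
    cells
        .iter()
        .map(|cell| wrapped_line_count_for_cell(cell, width))
        .sum()
}
```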
chore: handle all cases for EventMsg (#936)

This removes the `#[non_exhaustive]` attribute on `EventMsg` so that we are forced to handle every `EventMsg` variant by default. (We may revisit this if/when we publish `core/` as a `lib` crate.) For now, it is helpful to have this as a forcing function: we effectively have two UIs (`tui` and `exec`), and when we add a new variant to `EventMsg`, we usually want to be sure that we update both.
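To illustrate the forcing function with hypothetical variant names (these are not the real `EventMsg` variants):

```rust
// Without #[non_exhaustive] and without a `_ => {}` catch-all arm, adding a
// new variant makes every match site (e.g. in `tui` and `exec`) fail to
// compile until it is handled.
enum EventMsg {
    AgentMessage(String),
    TaskComplete,
    Error(String),
}

fn handle(event: EventMsg) {
    match event {
        EventMsg::AgentMessage(text) => println!("agent: {text}"),
        EventMsg::TaskComplete => println!("done"),
        EventMsg::Error(err) => eprintln!("error: {err}"),
        // No catch-all arm on purpose.
    }
}
```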