How to improve the Python import time of your Typer CLI
View the series
- See how I used the OpenAI API to generate audio and images
- See why structured outputs also need hard guardrails
- Grab ready-to-use pytest snippets mocking the OpenAI API
- Add context to OpenAI API error messages for easier debugging
- Learn how to log OpenAI API calls in JSON format
- Learn how I parametrized tests and generated test data with GPT-5.2
- Cut down your Python import time with this 6-step method
When the execution time of your Python CLI is dominated by LLM generations, your Python import time might not matter much.
But it's still worth speeding up a bit: asking for the version or help, or getting an error for a wrong argument, then stays fast. That makes for a better user experience.
This is what I did with my phrasebook CLI ↗, a command-line tool that generates translations, audio, and images with the OpenAI API.
I improved its execution time for showing the version from 0.578s to 0.075s. I did it by cutting down its import time with the method below.
6-step method to reduce Python import time

Basically, what you want to do is this. Assume your CLI has a --version option, and that running it doesn't need heavy libraries and should return fast.

1. Measure the execution time of your CLI. You can do this with time, like this:

   ```
   $ time uv run mycli.py --version
   ```

2. Find the libraries with a high import time using -X importtime, a CPython option ↗, like this:

   ```
   $ uv run python -X importtime mycli.py --version 2> importtime.txt
   ```

3. Import those libraries inside functions. Instead of this:

   ```python
   from expensive_import_time import bar

   def foo():
       x = do_something()
       return bar(x)
   ```

   Do this:

   ```python
   def foo():
       from expensive_import_time import bar

       x = do_something()
       return bar(x)
   ```

   You can also think about swapping one library for another one with a lower import time.

4. Postpone type annotation evaluation and store the annotations as strings. Add this line at the top of the file:

   ```python
   from __future__ import annotations
   ```

   If you don't do that, you'll get a runtime NameError for type annotations that use classes you now import inside functions (see step 3).

5. Keep your type checker happy. Put the imports you moved inside functions (see step 3) in the following if block, with TYPE_CHECKING imported from the typing library:

   ```python
   from typing import TYPE_CHECKING

   if TYPE_CHECKING:
       from expensive_import_time import bar
   ```

6. Measure the execution time of your CLI again, as in step 1. Check that it improved.

If you do this, you should see your CLI import time go down.
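To spot-check a single library without running the full -X importtime report from step 2, you can also time one import yourself. A minimal sketch, using the standard library's json as a stand-in for a heavy dependency:

```python
import importlib
import sys
import time


def time_import(module_name: str) -> float:
    # Drop any cached copy so the import below is (approximately) cold;
    # already-imported submodules stay cached, so this is a rough measure.
    sys.modules.pop(module_name, None)
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start


# json stands in for a heavy third-party library here.
print(f"json: {time_import('json') * 1000:.2f} ms")
```

This is handy in a REPL for quick comparisons, but -X importtime remains the authoritative breakdown since it also attributes time to transitive dependencies.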
Example: Reducing import time of a CLI generating translations with OpenAI API
Let's look at the following Python CLI. It's an overly simplified version of my phrasebook CLI ↗. It's a typer ↗ app: it takes one French word as an argument, translates it to English using the OpenAI API ↗, and saves the translation pair to the translations.csv file. It also has a --version option.
After installing the dependencies and exporting your OpenAI API key, you can translate the word "cuisine" ("kitchen") and save it in translations.csv like this:
```
$ uv init
$ uv add typer openai pandas pydantic
$ export OPENAI_API_KEY=<your-api-key>
$ uv run translate_with_slow_import_time.py cuisine
```
The file translate_with_slow_import_time.py is defined like this:

```python
# translate_with_slow_import_time.py
from pathlib import Path
from typing import Annotated
import logging

import pandas as pd
from pydantic import BaseModel
from openai import OpenAI
import typer

__version__ = "0.1.0"

logger = logging.getLogger(__name__)

TRANSLATIONS_FILE = Path("translations.csv")
COLUMNS = ["french", "english"]

app = typer.Typer(pretty_exceptions_enable=False)


class Translation(BaseModel):
    french: str
    english: str


def generate_translation(word: str, client: OpenAI) -> tuple[str, str] | None:
    response = client.responses.parse(
        model="gpt-5.2",
        instructions="Translate the following French word to English. Return the pair.",
        input=word,
        text_format=Translation,
        max_output_tokens=256,
    )
    if tr := response.output_parsed:
        return (tr.french, tr.english)
    return None


def save_translation(translation: tuple[str, str], translations_file: Path) -> None:
    if translations_file.exists():
        translations_df = pd.read_csv(translations_file, dtype="string")
    else:
        translations_df = pd.DataFrame(columns=pd.Index(COLUMNS), dtype="string")
    new_translation_df = pd.DataFrame(
        [translation], columns=pd.Index(COLUMNS), dtype="string"
    )
    updated = (
        pd.concat([translations_df, new_translation_df], ignore_index=True)
        if not translations_df.empty
        else new_translation_df
    )
    updated.to_csv(translations_file, index=False)


def version_callback(version: bool):
    if version:
        print(f"version {__version__}")
        raise typer.Exit()


@app.command()
def run(
    word: str,
    version: Annotated[
        bool | None,
        typer.Option("--version", callback=version_callback, is_eager=True),
    ] = None,
) -> None:
    client = OpenAI()
    translation = generate_translation(word, client)
    if not translation:
        logger.error(f"Couldn't generate translation for '{word}'.")
    else:
        save_translation(translation, TRANSLATIONS_FILE)


if __name__ == "__main__":
    app()
```
Step 1 - Measuring the execution time
Let's start by measuring the execution time.
```
$ time uv run translate_with_slow_import_time.py --version
version 0.1.0

real	0m0.578s
user	0m1.687s
sys	0m0.075s
```
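If you'd like to track this number automatically (in a regression test, say), you can also time the subprocess from Python instead of with time. A minimal sketch, using a bare interpreter as a stand-in for the CLI:

```python
import subprocess
import sys
import time


def cli_startup_seconds(args: list[str]) -> float:
    # Time a full subprocess run: interpreter startup, imports, and the command.
    start = time.perf_counter()
    subprocess.run(args, check=True, capture_output=True)
    return time.perf_counter() - start


# A bare interpreter gives the floor that no CLI built on it can beat.
baseline = cli_startup_seconds([sys.executable, "-c", "pass"])
print(f"baseline startup: {baseline:.3f}s")
```

Swap the argument list for your actual CLI invocation to measure it the same way.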
Step 2 - Finding libraries with high import time
Now we run the following command. It reports the import time of our CLI's libraries, including their dependencies. This gives us a 1400-row table in the importtime.txt file.

```
$ uv run python -X importtime translate_with_slow_import_time.py --version 2> importtime.txt
```
Here's a snippet, keeping only the top-level imported libraries:

```
import time: self [us] | cumulative | imported package
import time:       281 |        848 | _frozen_importlib_external
import time:        83 |        146 | zipimport
import time:       420 |        926 | encodings
import time:       121 |        121 | encodings.utf_8
import time:        66 |         66 | _signal
import time:        99 |        190 | io
import time:       332 |       1454 | site
import time:      1107 |       6819 | pathlib
import time:      1586 |       2117 | typing
import time:      1208 |       4109 | logging
import time:       229 |     183804 | pandas
import time:       180 |      29859 | pydantic
import time:       183 |       1635 | pydantic._internal._config
import time:      2246 |       3140 | pydantic._internal._decorators
import time:       207 |        971 | pydantic._internal._fields
import time:       194 |        714 | pydantic._internal._mock_val_ser
import time:       366 |       7539 | pydantic._internal._model_construction
import time:       374 |     221018 | openai
import time:       144 |       4681 | typer
```
The libraries with the biggest impact on import time are openai at 221ms, pandas at 184ms, and pydantic at 30ms. In the next step, we'll import them inside functions.
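The full importtime.txt is easier to digest if you sort it. Here's a sketch of a helper that pulls out the entries with the biggest cumulative times; it assumes the importtime.txt file produced above:

```python
def top_imports(path: str, n: int = 10) -> list[tuple[int, str]]:
    # Each data line of -X importtime output looks like:
    #   import time:   self [us] |  cumulative | imported package
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.startswith("import time:"):
                continue  # skip any unrelated stderr noise
            parts = line[len("import time:"):].split("|")
            if len(parts) != 3 or "cumulative" in parts[1]:
                continue  # skip malformed lines and the header row
            rows.append((int(parts[1].strip()), parts[2].strip()))
    # Sort by cumulative microseconds, biggest first.
    rows.sort(reverse=True)
    return rows[:n]
```

Calling top_imports("importtime.txt", n=5) then returns the five worst offenders as (microseconds, package) pairs.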
Note that we use pandas only to read and write CSV files, so we could swap it for the built-in csv library.
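For illustration, here's what a csv-based save_translation could look like. This is a sketch, not the version used in this post, and it appends rows instead of rewriting the whole file like the pandas version does:

```python
import csv
from pathlib import Path

COLUMNS = ["french", "english"]


def save_translation(translation: tuple[str, str], translations_file: Path) -> None:
    # Write the header only when creating the file, then append the new row.
    write_header = not translations_file.exists()
    with translations_file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(COLUMNS)
        writer.writerow(translation)
```

Since csv ships with Python, importing it costs far less than importing pandas.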
Step 3 - Importing libraries inside functions
Next we'll tweak the CLI and save the updated version as translate_with_better_import_time.py. Here we omit the parts of the code that stay the same and only show the new places where we import openai, pandas, and pydantic. For pydantic, we also move the Translation model definition inside the function, along with the pydantic import.
```python
# translate_with_better_import_time.py
from pathlib import Path
from typing import Annotated
import logging

import typer

# ...


def generate_translation(word: str, client: OpenAI) -> tuple[str, str] | None:
    from pydantic import BaseModel

    class Translation(BaseModel):
        french: str
        english: str

    # ...


def save_translation(translation: tuple[str, str], translations_file: Path) -> None:
    import pandas as pd

    # ...


# ...


@app.command()
def run(...) -> None:
    from openai import OpenAI

    # ...


if __name__ == "__main__":
    app()
```
If we try to run it, we get an expected NameError:

```
$ uv run translate_with_better_import_time.py --version
Traceback (most recent call last):
  File "/home/tony/work/python/py0024-import-time-typer-app-openai-pandas-pydantic-importtime/translate_with_better_import_time.py", line 17, in <module>
    def generate_translation(word: str, client: OpenAI) -> tuple[str, str] | None:
                                                ^^^^^^
NameError: name 'OpenAI' is not defined
```
Step 4 - Postponing type annotation evaluation and storing them as strings
To fix that NameError, we add this line at the top of the script:

```python
from __future__ import annotations
```

Then the CLI works fine:

```
$ uv run translate_with_better_import_time.py --version
version 0.1.0
```
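To see what the future import does, note that it makes Python store annotations as plain strings and never evaluate them at runtime. A small demonstration (UndefinedType below is deliberately a name that doesn't exist):

```python
from __future__ import annotations


def greet(name: UndefinedType) -> None:  # UndefinedType is never evaluated
    print(f"hello {name}")


# The annotation survives only as a string, so defining and
# calling the function works even though UndefinedType is undefined.
print(greet.__annotations__["name"])  # prints: UndefinedType
greet("world")  # prints: hello world
```

That's exactly why the functions can keep annotating parameters with classes that are now imported inside other functions.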
Step 5 - Making the type checker happy
Now we can see that the ty type checker isn't happy anymore:

```
$ ty check translate_with_better_import_time.py
error[unresolved-reference]: Name `OpenAI` used when not defined
  --> translate_with_better_import_time.py:18:45
   |
18 | def generate_translation(word: str, client: OpenAI) -> tuple[str, str] | None:
   |                                             ^^^^^^
19 |     from pydantic import BaseModel
   |
info: rule `unresolved-reference` is enabled by default
Found 1 diagnostic
```
To fix this, we add the following to our script:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from openai import OpenAI
```
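This works because typing.TYPE_CHECKING is False at runtime: the guarded import never executes, so it costs nothing, while static type checkers treat the constant as True and can resolve the name. A quick demonstration:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only type checkers see this; at runtime the block is skipped
    # entirely, so even a nonexistent module here raises nothing.
    from not_installed_anywhere import Whatever  # deliberately fake import

print(TYPE_CHECKING)  # prints: False
```

So we get the best of both worlds: no import cost at startup, full name resolution during type checking.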
So the final file is:
```python
# translate_with_better_import_time.py
from __future__ import annotations

from pathlib import Path
from typing import Annotated, TYPE_CHECKING
import logging

import typer

__version__ = "0.1.0"

if TYPE_CHECKING:
    from openai import OpenAI

logger = logging.getLogger(__name__)

TRANSLATIONS_FILE = Path("translations.csv")
COLUMNS = ["french", "english"]

app = typer.Typer(pretty_exceptions_enable=False)


def generate_translation(word: str, client: OpenAI) -> tuple[str, str] | None:
    from pydantic import BaseModel

    class Translation(BaseModel):
        french: str
        english: str

    response = client.responses.parse(
        model="gpt-5.2",
        instructions="Translate the following French word to English. Return the pair.",
        input=word,
        text_format=Translation,
        max_output_tokens=256,
    )
    if tr := response.output_parsed:
        return (tr.french, tr.english)
    return None


def save_translation(translation: tuple[str, str], translations_file: Path) -> None:
    import pandas as pd

    if translations_file.exists():
        translations_df = pd.read_csv(translations_file, dtype="string")
    else:
        translations_df = pd.DataFrame(columns=pd.Index(COLUMNS), dtype="string")
    new_translation_df = pd.DataFrame(
        [translation], columns=pd.Index(COLUMNS), dtype="string"
    )
    updated = (
        pd.concat([translations_df, new_translation_df], ignore_index=True)
        if not translations_df.empty
        else new_translation_df
    )
    updated.to_csv(translations_file, index=False)


def version_callback(version: bool):
    if version:
        print(f"version {__version__}")
        raise typer.Exit()


@app.command()
def run(
    word: str,
    version: Annotated[
        bool | None,
        typer.Option("--version", callback=version_callback, is_eager=True),
    ] = None,
) -> None:
    from openai import OpenAI

    client = OpenAI()
    translation = generate_translation(word, client)
    if not translation:
        logger.error(f"Couldn't generate translation for '{word}'.")
    else:
        save_translation(translation, TRANSLATIONS_FILE)


if __name__ == "__main__":
    app()
```
Then the type checker is happy:

```
$ ty check translate_with_better_import_time.py
All checks passed!
```
Step 6 - Measuring the execution time
Finally, we can check that we've improved our execution time. Elapsed time drops from 0.578s to 0.075s. We got there by cutting down the import time of our CLI.
```
$ time uv run translate_with_better_import_time.py --version
version 0.1.0

real	0m0.075s
user	0m0.053s
sys	0m0.022s
```
That's all I have for today! Talk soon 👋