
6 articles are tagged with "python"


Managing Python Versions with asdf

· About 8 min
Mikyan
White Shiba Inu

Using asdf for Python version management makes it easy to manage multiple Python versions.

asdf is a very convenient tool that can manage not only Python but also Node.js, Java, and other runtimes in a unified way.

The steps below are based on asdf v0.18.0.

What is asdf?

asdf is a tool for managing the runtime versions of multiple programming languages. It can replace individual version managers such as pyenv, nvm, and rbenv with a single unified tool.

Main benefits

  • Manage multiple languages with one tool
  • Per-project version settings
  • A rich plugin ecosystem

asdf vs. venv

asdf: manages versions of the Python interpreter itself

  • Switch between different Python versions such as 3.11.7 and 3.12.1

venv: isolates packages (libraries)

  • Use a different set of packages per project, even on the same Python version
  • Prevent dependency conflicts

By combining the two, you can properly manage both the Python version and your packages.


Installing asdf

On macOS

Install it by following the official guide.

brew install asdf

Configuring zsh

Add the following to your ~/.zshrc:

# asdf configuration (set ASDF_DATA_DIR before adding its shims directory to PATH)
export ASDF_DATA_DIR="$HOME/.asdf/data"
export PATH="${ASDF_DATA_DIR:-$HOME/.asdf}/shims:$PATH"

Setting up completions

# Create the completions directory
mkdir -p "${ASDF_DATA_DIR:-$HOME/.asdf}/completions"

# Generate the zsh completion file
asdf completion zsh > "${ASDF_DATA_DIR:-$HOME/.asdf}/completions/_asdf"

Then add the following to ~/.zshrc as well:

# Enable asdf completions
fpath=(${ASDF_DATA_DIR:-$HOME/.asdf}/completions $fpath)
autoload -Uz compinit && compinit

Apply the settings:

source ~/.zshrc

Installing and Configuring the Python Plugin

Adding the plugin

# Add the Python plugin
asdf plugin add python

# Check installed plugins
asdf plugin list

Listing available Python versions

# Show installable Python versions
asdf list all python

Installing Python

# Install the latest version
asdf install python latest

# Install a specific version (example)
asdf install python 3.11.7

# Check installed versions
asdf list python

Setting the version

asdf v0.18.0 uses the asdf set command:

# Global setting (used everywhere under your home directory)
asdf set --home python latest

# Check the current setting
asdf current python

Example output:

Name            Version         Source
python          3.13.5          /Users/username/.tool-versions

# Check where the python command resolves to
which python3

Example output:

/Users/username/.asdf/data/shims/python3

Project-Specific Settings

To use a specific Python version in a project directory:

# Move to the project directory
cd /path/to/your/project

# Set the Python version for the project
asdf set python 3.11.7

# A .tool-versions file is created
cat .tool-versions

Example output:

python 3.11.7

In the directory containing this file and everything below it, the specified Python version is used automatically.

Frequently Used Commands

# Show the versions currently in use
asdf current

# Current version of a specific language
asdf current python

# List installed versions
asdf list python

# List available versions (latest 10)
asdf list all python | tail -10

# Update the plugin
asdf plugin update python

# Remove an old version
asdf uninstall python 3.10.0

Troubleshooting

Python is not found

# Regenerate the shims
asdf reshim python

# Check the PATH
echo $PATH | grep asdf

Permission errors occur

# Check permissions on the asdf directory
ls -la ~/.asdf/

# Fix permissions if necessary
chmod -R 755 ~/.asdf/

Settings are not applied

# Reload .zshrc
source ~/.zshrc

# Check the current shell settings
echo $ASDF_DATA_DIR
echo $PATH | grep asdf


Practical Examples

Setting up a new project

# Create a project directory
mkdir my-python-project
cd my-python-project

# 1. Pin the Python version with asdf (interpreter management)
asdf set python 3.11.7

# 2. Create a virtual environment with venv (package management)
python -m venv venv
source venv/bin/activate

# 3. Install project-specific packages
pip install requests flask

# Result: Python 3.11.7 + an isolated package environment

Why do you still need venv?

Problem: without venv

# Project A
cd project-a
asdf set python 3.11.7
pip install django==4.2.0 requests==2.28.0

# Project B (same Python 3.11.7)
cd ../project-b
pip install flask==2.3.0 requests==2.31.0  # ← overwrites requests 2.28.0

# Back in project A...
cd ../project-a
# django may no longer work with requests 2.31.0!

Solution: asdf + venv

# Project A: Python 3.11.7 + isolated environment
project-a/ → django 4.2.0 + requests 2.28.0

# Project B: Python 3.11.7 + isolated environment
project-b/ → flask 2.3.0 + requests 2.31.0

Sharing a .tool-versions file with your team

# Share the .tool-versions file via Git
echo "python 3.11.7" > .tool-versions
echo "nodejs 18.19.0" >> .tool-versions

# Team members install the versions with
asdf install

Summary

Using asdf gives you the following benefits:

  • Unified management: version management for multiple languages in a single tool
  • Project-specific settings: manage each project's environment with a .tool-versions file
  • Easy switching: switch versions with a single command
  • Team development: keep the whole team's environment consistent with a .tool-versions file

It greatly simplifies environment setup for Python development, so it is a tool I especially recommend to developers who work across multiple projects.

Python Development Setup, pyenv, uv

· About 1 min
Mikyan
White Shiba Inu

This article introduces how to set up a Python development environment for web development.

For data science / machine learning development environments, you might prefer other approaches, such as Anaconda.

Details

Install uv

uv is the modern solution and handles both Python versions and package management. It's like having nvm + npm in one tool.

curl -LsSf https://astral.sh/uv/install.sh | sh

source $HOME/.local/bin/env

Use uv

# Install and use Python
uv python install 3.12
uv python pin 3.12  # Sets the Python version for the project

Use uv to initialize the project

cd my-project

uv init --python 3.12

Then the following files are generated:

tree
├── main.py
├── pyproject.toml
└── README.md

pyproject.toml is similar to package.json in the Node.js world.

You can then use:

# Install dependencies
# (uv automatically uses a virtual environment)
uv sync

# Add new packages
uv add fastapi sqlalchemy alembic

Check the pyproject.toml file's dependencies section to see the packages that were added.

If you want to start a FastAPI project, using FastAPI's full-stack template might be a better choice:

git clone git@github.com:fastapi/full-stack-fastapi-template.git my-full-stack
cd my-full-stack
git remote set-url origin git@github.com:octocat/my-full-stack.git
git push -u origin master

Python File System Operations

· About 3 min
Mikyan
White Shiba Inu
  • Use pathlib to handle file paths and operations by default
  • For fine-grained control over file I/O (streaming), use a context manager
  • Use the tempfile Module for Temporary Files/Directories
  • Use shutil for High-level Operations

Details

pathlib is the modern, object-oriented way to handle file paths and operations.

from pathlib import Path

# Create Path Objects:
my_file = Path("data")

# Use Path methods
my_file.exists()

# Content I/O
my_file.read_text()
# Path.read_bytes()
# Path.write_text()
# Path.write_bytes()

Create a Path Object

# From the current working directory
current_dir = Path.cwd()
# From Home
home_dir = Path.home()
# From absolute Paths
abs_path = Path("/usr/local/bin/python")
# From relative paths (relative to CWD)
relative_path = Path("data/input.csv")

# Create path by manipulation
base_dir = Path('/opt/my_app')
config_file = base_dir / "config" / "settings.yaml"

parent_dir = config_file.parent

Dealing with file names

# Get file / directory name
config_file.name

# Getting Stem
config_file.stem # settings

# Getting suffix
config_file.suffix
config_file.suffixes

# Get absolute path
config_file.resolve()
# or
config_file.absolute()

# Get relative path
rel_path = config_file.relative_to(base_dir)  # config/settings.yaml

Check / Query File System

my_file.exists()

my_file.is_file()
my_file.is_dir()
my_file.is_symlink()

# Statistics
stats = my_file.stat()
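
stat() returns an os.stat_result; a minimal sketch of reading a few common fields, assuming a file named "data" exists:

from datetime import datetime
from pathlib import Path

my_file = Path("data")  # assumed to exist for this sketch
stats = my_file.stat()

print(f"Size in bytes: {stats.st_size}")
print(f"Last modified: {datetime.fromtimestamp(stats.st_mtime)}")
print(f"Permission bits: {oct(stats.st_mode)}")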

Operations

# Create a directory
new_dir.mkdir()

# Create an empty file
empty_file.touch()

# Delete a file
file_to_delete.unlink()

# Delete an empty directory
empty_folder.rmdir()

# Rename / move a file or directory
old_path.rename(new_path)

# Change the suffix
config_file.with_suffix('.yml')

File Content I/O

config_path = Path("config.txt")
config_path.write_text("debug=True\nlog_level=INFO")
content = config_path.read_text()

binary_data_file = Path("binary_data.bin")
binary_data_file.write_bytes(b'\x01\x02\x03\x04')
data = binary_data_file.read_bytes()
print(f"Binary data: {data}")

Directory Iteration / Traversal

project_root = Path(".")

# List directory contents
project_root.iterdir()

# Globbing
project_root.glob("*.py")

# Walking the directory tree (Python 3.12+)
project_root.walk()
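
A short sketch of how these calls are typically consumed, assuming it runs from a project directory containing some .py files:

from pathlib import Path

project_root = Path(".")

# Print immediate children, marking directories
for entry in project_root.iterdir():
    kind = "dir " if entry.is_dir() else "file"
    print(f"{kind} {entry.name}")

# Recursively find Python files
for py_file in project_root.rglob("*.py"):
    print(py_file)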

Use Context Managers (with open(...)) for File I/O

When you need more fine-grained control over file reading and writing (streaming large files, specific encodings, or binary modes), use the with statement.

try:
    with open("my_large_file.csv", "w", encoding="utf-8") as f:
        f.write("Header1,Header2\n")
        for i in range(1000):
            f.write(f"data_{i},value_{i}\n")
except IOError as e:
    print(f"Error writing file: {e}")

Use the tempfile Module for Temporary Files/Directories

import tempfile
from pathlib import Path

# Using a temporary directory
with tempfile.TemporaryDirectory() as tmp_dir_str:
    tmp_dir = Path(tmp_dir_str)
    temp_file = tmp_dir / "temp_report.txt"
    temp_file.write_text("Ephemeral data.")
    print(f"Created temporary file at: {temp_file}")
# At the end of the 'with' block, tmp_dir_str and its contents are deleted
print("Temporary directory removed.")

Use shutil for High-level Operations

shutil focuses on operations that involve moving, copying, or deleting entire trees of files and directories, plus other utility functions that go beyond a single Path object's scope.

import shutil
from pathlib import Path

source_dir = Path("my_data")
destination_dir = Path("backup_data")

try:
    shutil.copytree(source_dir, destination_dir)
    print(f"Copied '{source_dir}' to '{destination_dir}'")
except FileExistsError:
    print(f"Destination '{destination_dir}' already exists. Skipping copy.")
except Exception as e:
    print(f"Error copying tree: {e}")

dir_to_delete = Path("backup_data")  # Assuming this exists from the copytree example

if dir_to_delete.exists():
    print(f"Deleting '{dir_to_delete}'...")
    shutil.rmtree(dir_to_delete)
    print("Directory deleted.")
else:
    print(f"Directory '{dir_to_delete}' does not exist.")

Zip / Tarring

shutil can even create compressed archives and unpack them.

archive_name = "my_data_backup"  # base name for the archive file
archive_path = shutil.make_archive(archive_name, 'zip', source_dir)
print(f"Created archive: {archive_path}")

Copy File Metadata

  • shutil.copystat(src, dst) copies permission bits, last access time, last modification time, and flags from one file to another (see the sketch below)
  • shutil.copy2(src, dst) copies the file contents together with its metadata
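
A minimal sketch contrasting the two, assuming source.txt exists and dest.txt is the copy target:

import shutil
from pathlib import Path

src = Path("source.txt")  # assumed to exist
dst = Path("dest.txt")

# copy2: copies file contents *and* metadata (timestamps, permission bits)
shutil.copy2(src, dst)

# copystat: copies only the metadata onto an already existing file
shutil.copystat(src, dst)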

Getting Disk Usage

usage = shutil.disk_usage(Path(".")) # Check current directory's disk
print(f"Total: {usage.total / (1024**3):.2f} GB")
print(f"Used: {usage.used / (1024**3):.2f} GB")
print(f"Free: {usage.free / (1024**3):.2f} GB")

Do not

  • Avoid os.system() or subprocess.run() for file operations in most cases; prefer pathlib and shutil

Python Async Programming

· About 2 min
Mikyan
White Shiba Inu

Python's asynchronous programming is built around the asyncio module and the async/await keywords.

Concept

A coroutine is a special type of function that represents a computation that can be paused and resumed.

A coroutine is defined with async def.

For example, the following function is a coroutine:

import asyncio

async def my_coroutine():
    print("Coroutine started")
    await asyncio.sleep(1)  # This is a pause point
    print("Coroutine resumed after 1 second")
    return "Done!"
  • Inside an async def function, the await keyword is used to pause the execution of the current coroutine.
  • When a coroutine awaits something, it signals to the event loop that it's waiting for an I/O operation or some other asynchronous event to complete.
  • While the current coroutine is paused, the event loop can switch its attention to other coroutines or tasks that are ready to run, ensuring efficient use of the CPU (see the sketch below).
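
A runnable sketch of this cooperative switching, reusing my_coroutine from above and adding a second, hypothetical another_task coroutine:

import asyncio

async def my_coroutine():
    print("Coroutine started")
    await asyncio.sleep(1)  # pause point: control returns to the event loop
    print("Coroutine resumed after 1 second")
    return "Done!"

async def another_task():
    # Runs while my_coroutine is paused on its sleep
    print("Another task running")
    await asyncio.sleep(0.5)
    return "Other done!"

async def main():
    # gather schedules both coroutines; the event loop interleaves them
    results = await asyncio.gather(my_coroutine(), another_task())
    print(results)  # ['Done!', 'Other done!']

asyncio.run(main())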

Why async def functions can be paused

  • A regular def function is executed directly by the Python interpreter: when you call it, the interpreter's program counter moves through its instructions sequentially. If it encounters something that blocks, the entire thread stops until that blocking operation is done.

  • An async def function, when called, doesn't immediately execute its body. Instead, it returns a coroutine object. This object is a special kind of generator that the asyncio event loop knows how to manage.

  • Use the await keyword to signal an intentional pause.

  • If there is no await inside an async def function, it runs like a regular synchronous function until completion (see the sketch below).
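
A small sketch of both points: calling an async def function only creates a coroutine object, and an async def with no await runs straight through once the event loop drives it:

import asyncio

async def no_pause():
    # No await here: once started, this runs to completion like a normal function
    return 40 + 2

coro = no_pause()
print(type(coro))  # <class 'coroutine'> -- nothing has executed yet

result = asyncio.run(coro)  # the event loop drives the coroutine to completion
print(result)  # 42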

The event loop is the orchestrator.

  • The asyncio event loop is continuously monitoring a set of registered coroutines/tasks. It's like a dispatcher.

  • State preservation (generators):

Conceptually, Python coroutines are built on top of generators. When a generator yields a value, its local state (variables, instruction pointer) is saved. When next() is called on it again, it resumes from where it left off.

Similarly, when an async def function awaits, its internal state is saved. When the awaited operation completes, the coroutine is "sent" a signal to resume, and it continues execution from the line immediately following the await.
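
The generator analogy in concrete form: a minimal sketch showing how yield saves local state between resumptions, the same machinery that awaiting builds on:

def counter():
    n = 0
    while True:
        n += 1
        yield n  # local state (n, position) is saved here until next() is called again

gen = counter()
print(next(gen))  # 1
print(next(gen))  # 2 -- resumes right after the yield, with n preserved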

Why Async Is Important for Web Frameworks

Python Pydantic

· About 3 min
Mikyan
White Shiba Inu

Pydantic can extend standard Python classes to provide robust data handling features. BaseModel is the fundamental class in Pydantic. By inheriting from BaseModel, Python classes become Pydantic models, gaining capabilities for:

  • Data Validation: Automatically checks the types and values of class attributes against your defined type hints. It raises a ValidationError with clear, informative messages if incoming data doesn't conform.
  • Data Coercion: Pydantic can intelligently convert input data to the expected type where appropriate (see the sketch below).
  • Instantiation: Creates instances of your model by passing keyword arguments or a dictionary to the constructor.
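
A small sketch of coercion with a hypothetical Order model: string input that cleanly represents the target type is converted rather than rejected:

from pydantic import BaseModel

class Order(BaseModel):
    item_id: int
    price: float
    urgent: bool

# Strings are coerced to int, float, and bool where the conversion is unambiguous
order = Order(item_id="42", price="19.99", urgent="true")
print(order)  # item_id=42 price=19.99 urgent=True
print(type(order.item_id))  # <class 'int'>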

Details

By inheriting from BaseModel, Python classes become Pydantic models.

You can use the Field function or annotations to add more specific constraints and metadata to your fields.

from typing import Optional
from pydantic import BaseModel, Field, EmailStr

class User(BaseModel):
    name: str
    age: int
    email: str

# Valid data
user = User(name="Alice", age=30, email="alice@example.com")
print(user)

# Invalid data will raise a ValidationError
try:
    User(name="Bob", age="twenty", email="bob@invalid")
except Exception as e:
    print(e)


class Product(BaseModel):
    id: int = Field(..., gt=0, description="Unique product identifier")
    name: str = Field(..., min_length=2, max_length=100)
    price: float = Field(..., gt=0.0)
    description: Optional[str] = None  # Optional field
    seller_email: EmailStr  # Pydantic's built-in email validation

product = Product(id=1, name="Laptop", price=1200.50, seller_email="seller@store.com")
print(product)

Create Pydantic Model

Use the constructor directly with an unpacked dictionary, or use model_validate to validate and convert a dict into a model.

model_validate_json validates and converts a JSON string into a model.

user_data = {
    "name": "Alice",
    "age": 30,
    "email": "alice@example.com"
}
user_model = User(**user_data)

user_model = User.model_validate(user_data)


class Movie(BaseModel):
    title: str
    year: int
    director: str
    genres: list[str]

# Your JSON string data
json_string = '''
{
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genres": ["Sci-Fi", "Action", "Thriller"]
}
'''
movie_model = Movie.model_validate_json(json_string)

Validate dictionary and JSON string: model_validate(), model_validate_json()

model_validate: validates a Python dictionary
model_validate_json: validates a JSON string

from pydantic import BaseModel
import json

class Item(BaseModel):
    name: str
    quantity: int

data_dict = {"name": "Apple", "quantity": 5}
item1 = Item.model_validate(data_dict)
print(item1)

json_data = '{"name": "Banana", "quantity": 10}'
item2 = Item.model_validate_json(json_data)
print(item2)

Serialization: model_dump(), model_dump_json().

model_dump: to a Python dictionary
model_dump_json: to JSON

from pydantic import BaseModel

class City(BaseModel):
    name: str
    population: int

tokyo = City(name="Tokyo", population=14000000)
print(tokyo.model_dump())
print(tokyo.model_dump_json(indent=2))  # Pretty print JSON

Custom Validators: @field_validator, @model_validator

from datetime import date
from pydantic import BaseModel, ValidationError, field_validator, model_validator

class Event(BaseModel):
    name: str
    start_date: date
    end_date: date

    @field_validator('name')
    @classmethod
    def check_name_is_not_empty(cls, v):
        if not v.strip():
            raise ValueError('Event name cannot be empty')
        return v

    @model_validator(mode='after')  # 'after' means after field validation
    def check_dates_order(self):
        if self.start_date > self.end_date:
            raise ValueError('Start date must be before end date')
        return self

try:
    event1 = Event(name="Conference", start_date="2025-07-20", end_date="2025-07-22")
    print(event1)
except ValidationError as e:
    print(e)

try:
    Event(name="Bad Event", start_date="2025-07-25", end_date="2025-07-23")
except ValidationError as e:
    print(e)

Nested Models

from pydantic import BaseModel
from typing import List

class Address(BaseModel):
    street: str
    city: str
    zip_code: str

class Customer(BaseModel):
    customer_id: int
    name: str
    shipping_addresses: List[Address]

customer_data = {
    "customer_id": 123,
    "name": "Jane Doe",
    "shipping_addresses": [
        {"street": "123 Main St", "city": "Anytown", "zip_code": "12345"},
        {"street": "456 Oak Ave", "city": "Otherville", "zip_code": "67890"}
    ]
}

customer = Customer.model_validate(customer_data)
print(customer)

JSON Schema Generation

from pydantic import BaseModel
import json

class Task(BaseModel):
    id: int
    title: str
    completed: bool = False

# model_json_schema() returns a dict; pretty-print it with json.dumps
print(json.dumps(Task.model_json_schema(), indent=2))

References

See the official Pydantic documentation for details on how to use it.

Python Type Hint

· About 2 min
Mikyan
White Shiba Inu

Python introduced type hints in version 3.5, and they have become more and more powerful since then.

With them, you can annotate the types of your variables for readability.

Type hints are hints, not enforcement: Python still runs the code even if the types don't match (see the sketch below).
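
A tiny sketch of that point: the annotations below disagree with the values, yet the code still runs; a static checker such as mypy would flag both lines instead:

age: int = "thirty"  # wrong type, but Python happily runs this
print(age)  # thirty

def double(n: int) -> int:
    return n * 2

print(double("ha"))  # returns 'haha' at runtime; mypy would report an error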

Usage

# Primitives
name: str = "Tom"
age: int = 30
salary: float = 500.5
is_active: bool = True

# Collections
numbers: list = [1,2,3]
scores: tuple = (90, 85, 88)
unique: set = {1, 2, 3}
data: dict = {"key": "value"}


# Specific Collection Types

from typing import List, Dict, Tuple, Set

names: List[str] = ["Alice", "Bob", "Charlie"]
user: Dict[str, str] = {
    "name": "John",
    "email": "john@example.com"
}
person: Tuple[str, int, bool] = ("Alice", 30, True)
unique_ids: Set[int] = {1, 2, 3, 4, 5}

# After Python 3.9, the following built-in generics also work
names: list[str] = ["Alice", "Bob", "Charlie"]
user: dict[str, str] = {
    "name": "John",
    "email": "john@example.com"
}
person: tuple[str, int, bool] = ("Alice", 30, True)
unique_ids: set[int] = {1, 2, 3, 4, 5}

# Optional

from typing import Optional

# can be string or None
middle_name: Optional[str] = None

# Union
from typing import Union
number: Union[int, float] = 10
number = 10.5


# Literal for exact values
from typing import Literal
Status = Literal["pending", "approved", "rejected"]

def process_order(status: Status) -> None:
    pass

# TypedDict
from typing import TypedDict
# TypedDict for dictionary structures
class UserDict(TypedDict):
    name: str
    age: int
    email: str


# Class (assuming a User class and a get_user() helper exist)
user: User = get_user(123)

# method
def calculate_bmi(weight: float, height: float) -> float:
    return weight / (height ** 2)

# Self
from typing import Self

class User:
    def copy(self) -> Self:  # Returns the same class type (Python 3.11+)
        return User()