Updated community sentiment
This commit is contained in:
BIN
jokes_bot/.DS_Store
vendored
Binary file not shown.
148
jokes_bot/v4.0/CHANGES_SUMMARY.md
Normal file
@@ -0,0 +1,148 @@
# Joke Bot v4.0 - User Identification Enhancement Summary

## Overview

This enhancement adds user identification to the sentiment tracking system, enabling detailed analytics on who likes which jokes and comprehensive community insights.

## Key Changes Made

### 1. Database Schema Enhancement

**File Modified:** `jokes.py` (initialize_database function)

**Changes:**

- Added `user_identifier TEXT NOT NULL` column to the `user_sentiments` table
- Modified sample data to include user identifiers
- Updated foreign key constraints to maintain data integrity

**Before:**

```sql
CREATE TABLE user_sentiments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    joke_id INTEGER NOT NULL,
    user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (joke_id) REFERENCES jokes(id) ON DELETE CASCADE
);
```

**After:**

```sql
CREATE TABLE user_sentiments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    joke_id INTEGER NOT NULL,
    user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
    user_identifier TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (joke_id) REFERENCES jokes(id) ON DELETE CASCADE
);
```

### 2. Enhanced Sentiment Functions

**File Modified:** `jokes.py`

**New Functions Added:**

- `get_detailed_sentiment_stats(db, joke_id)` - Shows the sentiment breakdown by user groups
- `get_user_sentiment_history(db, user_identifier)` - Retrieves a user's complete rating history
- `get_popular_jokes_by_user(db, user_identifier)` - Shows jokes a specific user has liked
- `get_top_users_by_joke_preference(db)` - Community leaderboard showing the most positive users

**Modified Functions:**

- `add_user_sentiment(db, joke_id, user_choice, user_identifier)` now accepts and stores `user_identifier`
- Added logic to update existing ratings instead of creating duplicates
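The check-then-write flow behind this duplicate prevention can be sketched as follows (a minimal sketch against the schema above; the actual implementation in `jokes.py` may differ in details):

```python
import sqlite3

def add_user_sentiment(db, joke_id, user_choice, user_identifier):
    """Record a rating, or update the user's existing rating for this joke."""
    cur = db.cursor()
    cur.execute(
        "SELECT id FROM user_sentiments WHERE joke_id = ? AND user_identifier = ?",
        (joke_id, user_identifier),
    )
    row = cur.fetchone()
    if row:
        # The user already rated this joke: update in place instead of
        # inserting a duplicate row.
        cur.execute(
            "UPDATE user_sentiments SET user_sentiment = ? WHERE id = ?",
            (user_choice, row[0]),
        )
        action = "updated"
    else:
        cur.execute(
            "INSERT INTO user_sentiments (joke_id, user_sentiment, user_identifier) "
            "VALUES (?, ?, ?)",
            (joke_id, user_choice, user_identifier),
        )
        action = "recorded"
    db.commit()
    return True, action
```

The `(success, action)` return shape matches the usage example later in this summary.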
### 3. Interactive User Experience

**File Modified:** `jokes.py` (main function)

**Enhancements:**

- Session-based user identification at startup
- Personalized menu options (3-5) for user analytics
- Detailed sentiment displays showing the community breakdown
- Rating history and favorite joke tracking
- Community analytics dashboard

### 4. Documentation Updates

**Files Modified:** `README.md`; **Created:** `CHANGES_SUMMARY.md`

**Documentation Added:**

- Enhanced feature descriptions
- New menu option explanations
- Usage examples with user identification
- Technical improvements section
- Benefits of user identification

### 5. Sample Data Enhancement

**File Modified:** `clean_sample_data.sql`

**Improvements:**

- Updated schema definition with `user_identifier`
- More realistic sample user identifiers
- Comprehensive verification queries
- Community analytics examples

## New Analytics Capabilities

### Personal Analytics (Per User)

1. **Rating History**: Track all jokes a user has rated, with timestamps
2. **Favorite Jokes**: See which jokes a user has specifically liked (up votes)
3. **Rating Patterns**: Analyze individual user preferences over time

### Community Analytics

1. **User Leaderboard**: Rank users by positivity percentage
2. **Engagement Metrics**: Measure user participation levels
3. **Sentiment Distribution**: View a detailed breakdown of community opinions
4. **Contributor Insights**: Identify which joke contributors are most popular
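In SQL terms, the positivity leaderboard reduces to a single aggregate over `user_sentiments` (a sketch; `get_top_users_by_joke_preference` in `jokes.py` may compute it differently, e.g. by weighting 'neutral' votes):

```python
import sqlite3

LEADERBOARD_SQL = """
    SELECT user_identifier,
           COUNT(*) AS total_ratings,
           SUM(CASE WHEN user_sentiment = 'up' THEN 1 ELSE 0 END) AS up_votes,
           ROUND(100.0 * SUM(CASE WHEN user_sentiment = 'up' THEN 1 ELSE 0 END)
                 / COUNT(*), 1) AS positivity_percent
    FROM user_sentiments
    GROUP BY user_identifier
    ORDER BY positivity_percent DESC, total_ratings DESC
"""

def get_top_users(db, limit=5):
    # Each row: (user_identifier, total_ratings, up_votes, positivity_percent)
    return db.execute(LEADERBOARD_SQL + " LIMIT ?", (limit,)).fetchall()
```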
### Business Intelligence

1. **User Segmentation**: Group users by rating patterns
2. **Content Performance**: Track which jokes perform best with different user segments
3. **Recommendation Engine Foundation**: A data structure ready for personalized joke suggestions
4. **Quality Metrics**: Measure joke quality through community consensus

## Implementation Benefits

### Technical Advantages

- **Data Integrity**: Prevents duplicate ratings while allowing updates
- **Scalability**: Efficient querying for large user bases
- **Flexibility**: Easy to extend with additional user metadata
- **Analytics Ready**: A rich dataset for future enhancements

### User Experience Improvements

- **Personalization**: Users can track their own preferences
- **Transparency**: Clear visibility into community opinions
- **Engagement**: More interactive features encourage participation
- **Discovery**: Users can find content aligned with their preferences

### Future Enhancement Opportunities

- **Machine Learning**: Train recommendation systems on user preferences
- **Social Features**: Allow users to follow others with similar tastes
- **Advanced Analytics**: Trend analysis and predictive modeling
- **Gamification**: Achievement systems based on participation

## Testing and Validation

Created supporting files:

- `test_basic.py` - Verifies core functionality
- `demo_features.py` - Demonstrates all new features
- Comprehensive error handling throughout

## Migration Notes

For existing databases:

1. The enhanced `initialize_database()` function handles schema upgrades automatically
2. Existing sentiment data is preserved with generic user identifiers
3. No manual migration steps are required
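One way such an automatic upgrade can work is to inspect the existing columns and add the missing one, backfilling old rows with a generic identifier (a hedged sketch; the actual `initialize_database` may implement this differently, and the backfill name `'LegacyUser'` is an assumption):

```python
import sqlite3

def upgrade_user_sentiments(db):
    """Add user_identifier to an old-schema database, preserving existing rows."""
    cols = [row[1] for row in db.execute("PRAGMA table_info(user_sentiments)")]
    if "user_identifier" not in cols:
        # SQLite requires a DEFAULT when adding a NOT NULL column to a table
        # that may already contain rows; old ratings get a generic identifier.
        db.execute(
            "ALTER TABLE user_sentiments "
            "ADD COLUMN user_identifier TEXT NOT NULL DEFAULT 'LegacyUser'"
        )
        db.commit()
```

Calling it on an already-upgraded database is a no-op, so it is safe to run at every startup.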
## Usage Example

```python
# User identification happens automatically.
# When a user rates a joke:
success, action = add_user_sentiment(db, joke_id=5, user_choice='up', user_identifier='JohnDoe')
# `action` will be "recorded" for new ratings or "updated" for existing ones.

# Get detailed analytics:
user_history = get_user_sentiment_history(db, 'JohnDoe')
community_leaderboard = get_top_users_by_joke_preference(db)
```

This enhancement transforms the joke bot from a simple rating system into a comprehensive analytics platform while maintaining all existing functionality.
@@ -1,3 +1,143 @@
# AI-Enhanced Joke Bot v4.0 - Enhanced User Analytics Edition

Welcome to the AI-Enhanced Joke Bot! This application combines humor with artificial intelligence to deliver jokes and analyze user preferences with detailed analytics.

## 📋 Project Structure

```
v4.0/
├── jokes.py               # Main application with enhanced user sentiment tracking and analytics
├── jokes.db               # SQLite database containing jokes and detailed user sentiments
├── clean_sample_data.sql  # Clean sample data with user identifiers
└── README.md              # This file
```

## 🚀 Quick Start

### 1. Run the Application

Simply run the main application file:

```bash
python jokes.py
```

The application will:

- Automatically create the database if it doesn't exist
- Add sample data with user identifiers
- Prompt you for a username to track your preferences

## 🔧 Key Features

### Enhanced User Sentiment Tracking

- **User Identification**: Each sentiment rating is linked to a specific user identifier
- **Rating History**: Track all jokes you've rated, with timestamps
- **Personal Favorites**: See which jokes you've specifically liked
- **Duplicate Prevention**: Users can update their existing ratings instead of casting duplicates

### Advanced Analytics

- **Community Insights**: See overall sentiment trends and statistics
- **User Preference Analysis**: Identify which users have the most positive or negative ratings
- **Detailed Breakdowns**: View the sentiment distribution for individual jokes
- **Popularity Metrics**: Understand which jokes the community likes most

## 🎯 Menu Options

1. **Get Random Joke**: Receive a joke with community ratings and a detailed sentiment breakdown
2. **Add New Joke**: Submit your own jokes for community rating
3. **View Your Ratings**: See your complete rating history
4. **See Your Favorites**: View jokes you've specifically liked
5. **Community Analytics**: Access detailed community statistics and user rankings
6. **Quit**: Exit the application

## 📊 Database Schema Enhancements

The `user_sentiments` table now includes:

- `user_identifier`: Links each rating to a specific user
- `user_sentiment`: The rating (up/down/neutral)
- `created_at`: Timestamp of when the rating was made
- Support for updating existing ratings
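Combined with the `jokes` table, these columns make per-user history a single join (a sketch, assuming `jokes` has `id` and `joke` columns as in the sample data; the shipped `get_user_sentiment_history` may differ):

```python
import sqlite3

def get_user_history(db, user_identifier):
    """All ratings by one user, newest first: (joke, user_sentiment, created_at)."""
    return db.execute(
        """
        SELECT j.joke, us.user_sentiment, us.created_at
        FROM user_sentiments us
        JOIN jokes j ON j.id = us.joke_id
        WHERE us.user_identifier = ?
        ORDER BY us.created_at DESC
        """,
        (user_identifier,),
    ).fetchall()
```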
## 💡 Usage Examples

### Getting Started

```
🤖 Welcome to Joke Bot! 🤖
Please enter your username (or press Enter for 'Guest'): JohnDoe
👤 Logged in as: JohnDoe
```

### Viewing Detailed Analytics

When viewing a joke, you'll see:

```
🤣 Why don't scientists trust atoms? Because they make up everything!
👤 Contributor: ScienceFan
👥 Community Rating: 👍 Up (3 votes)
🔍 Detailed Breakdown:
   👍 Up: 2 votes
   😐 Neutral: 1 vote
```
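The breakdown shown above is a straight `GROUP BY` over the ratings for one joke (a sketch of the idea behind `get_detailed_sentiment_stats`, not necessarily its exact query):

```python
import sqlite3

def sentiment_breakdown(db, joke_id):
    """Vote counts per sentiment for one joke, e.g. {'up': 2, 'neutral': 1}."""
    rows = db.execute(
        "SELECT user_sentiment, COUNT(*) FROM user_sentiments "
        "WHERE joke_id = ? GROUP BY user_sentiment",
        (joke_id,),
    )
    return dict(rows)
```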
### Accessing Personal Analytics

Option 3 shows your rating history:

```
📊 Your Joke Ratings History (5 ratings):
--------------------------------------------------
👍 Why did the scarecrow win an award? He was outstanding in his field!
   Rated: 2024-01-15 14:30:25

👎 What do you call a fish with no eyes? Fsh!
   Rated: 2024-01-15 14:25:10
```

### Community Insights

Option 5 provides comprehensive analytics:

```
📈 COMMUNITY ANALYTICS
==============================
🏆 Most Positive Users:
-------------------------
1. ComedyFan: 85.7% positive (6/7 ratings)
2. JokeLover: 75.0% positive (3/4 ratings)
3. FunnyPerson: 66.7% positive (4/6 ratings)

📊 Overall Statistics:
   Total jokes rated: 8
   Total ratings: 24
   Community positivity: 62.5%
```

## 🛠️ Technical Improvements

- **Enhanced Data Model**: User identifiers enable sophisticated analytics
- **Better UX**: Session-based user identification for consistent tracking
- **Rich Analytics**: Multiple layers of statistical insight
- **Data Integrity**: Prevents duplicate ratings while allowing updates

## 🎉 Benefits of User Identification

1. **Personalized Experience**: Track your own joke preferences over time
2. **Social Insights**: Understand community rating patterns
3. **Quality Control**: Identify consistently popular contributors
4. **Engagement Metrics**: Measure user participation and preferences
5. **Recommendation Potential**: A foundation for future personalized joke recommendations

Enjoy the enhanced AI-powered humor experience with detailed user analytics!

# Joke Bot - Enhanced Setup Instructions

## For Beginners

### Running the Enhanced Version

1. Simply run: `python jokes.py`
2. Enter your username when prompted
3. Enjoy personalized joke ratings and community analytics!

The enhanced version automatically handles all database setup and provides rich analytics features right out of the box.
@@ -122,3 +262,74 @@ The application uses two tables:
- `user_sentiments`: Tracks user ratings for each joke with timestamps

Enjoy the AI-enhanced humor experience with community feedback!
# Joke Bot - Simple Setup Instructions

Welcome to the Joke Bot! This is a fun program that tells jokes and lets you rate them. Follow these simple steps to get it working!

## Simplified Project Structure

The application now has a minimal structure:

```
v4.0/
├── jokes.py   # Complete application with database creation and sample data
└── README.md  # This file
```

## Step-by-Step Instructions for Beginners

### Step 1: Open Python IDLE

1. Find Python IDLE on your computer (it looks like a blue and white icon)
2. Click on it to open it
3. When it opens, click on "File" then "New File" to create a new file

### Step 2: Save Your New File

1. In the new file window, click "File" then "Save As..."
2. Save the file in your folder and name it `jokes.py`
3. Remember where you saved it!

### Step 3: Get the Code from the Website

1. Go to this website: https://gitea.techshare.cc/technolyceum/ai6-m3/src/branch/main/jokes_bot/v4.0
2. Look for the `jokes.py` file in the list
3. Click on it to see the code
4. Select all the code (Ctrl+A on Windows or Cmd+A on Mac)
5. Copy it (Ctrl+C on Windows or Cmd+C on Mac)

### Step 4: Paste the Code

1. Go back to your Python IDLE window
2. Paste the code you copied (Ctrl+V on Windows or Cmd+V on Mac)
3. Click "File" then "Save" to save the code in your file

### Step 5: Run the Program in PowerShell

1. Open PowerShell (on Windows) or Terminal (on Mac)
2. Navigate to the folder where you saved your `jokes.py` file
   - Type: `cd ` followed by the path to your folder
3. Create a virtual environment (this keeps your programs organized):
   - Type: `python -m venv venv`
4. Activate the virtual environment:
   - On Windows: `venv\Scripts\Activate.ps1`
   - On Mac: `source venv/bin/activate`
5. Run the program:
   - Type: `python jokes.py`

### Step 6: Using the Joke Bot

1. When the program runs, you'll see a menu with options
2. Press "1" to get a random joke
3. Press "2" to add your own joke
4. Press "3" to quit the program

## Simple Explanation for New Programmers

Think of programming like cooking:

- The code is like a recipe
- Python is like your kitchen tools
- The computer is like your kitchen where everything happens

When you run the Joke Bot:

- The program automatically creates a database (like a notebook) to store jokes
- If there are no jokes in the notebook, it adds some sample jokes for you
- Then it shows you the menu so you can interact with it

The program is smart enough to handle everything by itself - you don't need to worry about databases or SQL statements. Just run it and enjoy the jokes!

Have fun with your Joke Bot! 😄
BIN
jokes_bot/v4.0/__pycache__/jokes.cpython-314.pyc
Normal file
Binary file not shown.
@@ -1,67 +0,0 @@
#!/usr/bin/env python3
"""
Simple script to check the content of the jokes database
"""

import sqlite3
import os


def check_database():
    db_path = 'jokes.db'

    if not os.path.exists(db_path):
        print(f"❌ Database {db_path} does not exist!")
        return

    print(f"🔍 Checking database: {db_path}")

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Count the number of jokes in the database
    cursor.execute('SELECT COUNT(*) FROM jokes')
    count = cursor.fetchone()[0]
    print(f"📊 Total jokes in database: {count}")

    # Count the number of user sentiments
    cursor.execute('SELECT COUNT(*) FROM user_sentiments')
    sentiment_count = cursor.fetchone()[0]
    print(f"📊 Total user sentiments recorded: {sentiment_count}")

    # If there are jokes, show a few of them
    if count > 0:
        cursor.execute('''
            SELECT j.id, j.joke, j.contributor, j.sentiment_label,
                   (SELECT COUNT(*) FROM user_sentiments us WHERE us.joke_id = j.id) as rating_count
            FROM jokes j
            LIMIT 5
        ''')
        jokes = cursor.fetchall()
        print('\n📋 Sample of jokes in the database:')
        # Note: the per-joke count is unpacked as rating_count so it does not
        # shadow the overall sentiment_count computed above.
        for i, (joke_id, joke, contributor, sentiment, rating_count) in enumerate(jokes, 1):
            print(f'{i:2d}. [{sentiment}] {joke[:60]}...')
            print(f'    👤 {contributor} | {rating_count} user ratings')

        # Show AI sentiment distribution
        cursor.execute('SELECT sentiment_label, COUNT(*) FROM jokes GROUP BY sentiment_label')
        distribution = cursor.fetchall()
        print('\n📈 AI Sentiment distribution:')
        for label, cnt in distribution:
            print(f'  {label}: {cnt} jokes')

        # Show user sentiment distribution if any exist
        if sentiment_count > 0:
            cursor.execute('SELECT user_sentiment, COUNT(*) FROM user_sentiments GROUP BY user_sentiment')
            user_distribution = cursor.fetchall()
            print('\n👥 User Sentiment distribution:')
            for sentiment, cnt in user_distribution:
                emoji = {'up': '👍', 'down': '👎', 'neutral': '😐'}[sentiment]
                print(f'  {emoji} {sentiment.capitalize()}: {cnt} ratings')
    else:
        print("\n📭 No jokes found in the database!")
        print("💡 Run populate_db.py to add sample jokes to the database.")

    conn.close()


if __name__ == '__main__':
    check_database()
@@ -1,22 +1,108 @@
--- Insert 20 dummy jokes with various sentiments
-INSERT INTO jokes (joke, contributor, created_date, approved, sentiment_score, sentiment_label) VALUES
-('Why don''t scientists trust atoms? Because they make up everything!', 'ScienceFan', '2024-01-15 10:30:00', 1, 0.75, '😊 Positive'),
-('I told my wife she was drawing her eyebrows too high. She looked surprised.', 'Joker123', '2024-01-16 14:20:00', 1, 0.35, '😊 Positive'),
-('Why did the scarecrow win an award? He was outstanding in his field!', 'FarmLife', '2024-01-17 09:15:00', 1, 0.65, '😊 Positive'),
-('What do you call a fish with no eyes? Fsh!', 'MarineBio', '2024-01-18 16:45:00', 1, 0.25, '😊 Positive'),
-('I''m reading a book on anti-gravity. It''s impossible to put down!', 'PhysicsNerd', '2024-01-19 11:30:00', 1, 0.45, '😊 Positive'),
-('Why did the computer go to the doctor? Because it had a virus.', 'TechSupport', '2024-01-20 13:10:00', 1, 0.05, '😐 Neutral'),
-('What do you call a bear with no teeth? A gummy bear.', 'WildlifeFan', '2024-01-21 15:25:00', 1, 0.08, '😐 Neutral'),
-('Why did the bicycle fall over? Because it was two-tired.', 'Cyclist', '2024-01-22 10:00:00', 1, -0.02, '😐 Neutral'),
-('What do you call a sleeping bull? A bulldozer.', 'Cowboy', '2024-01-23 14:35:00', 1, 0.03, '😐 Neutral'),
-('Why did the math book look so sad? Because it had too many problems.', 'Student', '2024-01-24 09:50:00', 1, -0.05, '😐 Neutral'),
-('I used to play piano by ear, but now I use my hands.', 'Musician', '2024-01-25 12:15:00', 1, -0.15, '😒 Negative'),
-('I told my computer I needed a break, and now it won''t stop sending me Kit-Kat ads.', 'OfficeWorker', '2024-01-26 16:30:00', 1, -0.25, '😒 Negative'),
-('Parallel lines have so much in common. It''s a shame they''ll never meet.', 'MathTeacher', '2024-01-27 11:40:00', 1, -0.35, '😒 Negative'),
-('My wife told me to stop impersonating a flamingo. I had to put my foot down.', 'Husband', '2024-01-28 14:55:00', 1, -0.20, '😒 Negative'),
-('I told my girlfriend she drew her eyebrows too high. She seemed surprised.', 'Boyfriend', '2024-01-29 10:10:00', 1, -0.30, '😒 Negative'),
-('What''s orange and sounds like a parrot? A carrot!', 'Vegetarian', '2024-01-30 13:20:00', 1, 0.85, '😊 Positive'),
-('Why don''t eggs tell jokes? They''d crack each other up!', 'Chef', '2024-01-31 15:45:00', 1, 0.90, '😊 Positive'),
-('I invented a new word: Plagiarism!', 'Writer', '2024-02-01 09:30:00', 1, 0.78, '😊 Positive'),
-('Why did the golfer bring two pairs of pants? In case he got a hole in one!', 'Golfer', '2024-02-02 12:15:00', 1, 0.82, '😊 Positive'),
-('What do you call a fake noodle? An impasta!', 'ItalianFood', '2024-02-03 14:40:00', 1, 0.88, '😊 Positive');
+-- Enhanced Joke Bot Database Schema with User Identification
+-- Version 4.0 - User Analytics Edition
+
+-- Drop existing tables if they exist (clean slate)
+DROP TABLE IF EXISTS user_sentiments;
+DROP TABLE IF EXISTS jokes;
+
+-- Create jokes table
+CREATE TABLE jokes (
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    joke TEXT NOT NULL,
+    contributor TEXT NOT NULL,
+    created_date TEXT NOT NULL,
+    approved BOOLEAN DEFAULT 0,
+    sentiment_score REAL DEFAULT 0.0,
+    sentiment_label TEXT DEFAULT '😐 Neutral'
+);
+
+-- Create user_sentiments table with user identification
+CREATE TABLE user_sentiments (
+    id INTEGER PRIMARY KEY AUTOINCREMENT,
+    joke_id INTEGER NOT NULL,
+    user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
+    user_identifier TEXT NOT NULL,
+    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+    FOREIGN KEY (joke_id) REFERENCES jokes(id) ON DELETE CASCADE
+);
+
+-- Insert sample jokes with approval status
+INSERT INTO jokes (joke, contributor, created_date, approved) VALUES
+('Why don''t scientists trust atoms? Because they make up everything!', 'ScienceFan', '2024-01-15 10:30:00', 1),
+('I told my wife she was drawing her eyebrows too high. She looked surprised.', 'Joker123', '2024-01-16 14:20:00', 1),
+('Why did the scarecrow win an award? He was outstanding in his field!', 'FarmLife', '2024-01-17 09:15:00', 1),
+('What do you call a fish with no eyes? Fsh!', 'MarineBio', '2024-01-18 16:45:00', 1),
+('I''m reading a book on anti-gravity. It''s impossible to put down!', 'PhysicsNerd', '2024-01-19 11:30:00', 1),
+('Why did the computer go to the doctor? Because it had a virus.', 'TechSupport', '2024-01-20 13:10:00', 1),
+('What do you call a bear with no teeth? A gummy bear.', 'WildlifeFan', '2024-01-21 15:25:00', 1),
+('Why did the bicycle fall over? Because it was two-tired.', 'Cyclist', '2024-01-22 10:00:00', 1),
+('What do you call a sleeping bull? A bulldozer.', 'Cowboy', '2024-01-23 14:35:00', 1),
+('Why did the math book look so sad? Because it had too many problems.', 'Student', '2024-01-24 09:50:00', 1);
+
+-- Insert sample user sentiments with user identifiers
+INSERT INTO user_sentiments (joke_id, user_sentiment, user_identifier) VALUES
+(1, 'up', 'ComedyFan'),
+(1, 'up', 'JokeLover'),
+(1, 'neutral', 'CritiqueMaster'),
+(1, 'up', 'FunnyPerson'),
+(2, 'down', 'SeriousReader'),
+(2, 'up', 'HappyViewer'),
+(2, 'neutral', 'NeutralObserver'),
+(3, 'up', 'ComedyFan'),
+(3, 'up', 'JokeLover'),
+(3, 'up', 'FunnyPerson'),
+(3, 'down', 'CritiqueMaster'),
+(4, 'neutral', 'FishExpert'),
+(4, 'down', 'GrammarNazi'),
+(4, 'up', 'PunLover'),
+(5, 'up', 'ScienceGeek'),
+(5, 'up', 'BookWorm'),
+(5, 'neutral', 'Skeptic'),
+(6, 'up', 'TechEnthusiast'),
+(6, 'down', 'ComputerHater'),
+(7, 'up', 'AnimalLover'),
+(7, 'up', 'WordPlayFan'),
+(8, 'up', 'CyclingFan'),
+(8, 'neutral', 'BikeNovice'),
+(9, 'up', 'FarmKid'),
+(9, 'down', 'CitySlicker'),
+(10, 'neutral', 'MathStudent'),
+(10, 'up', 'ProblemSolver');
+
+-- Verification queries
+SELECT '✅ Database setup complete!' as status;
+
+-- Show joke counts
+SELECT 'Total jokes in database:' as info, COUNT(*) as count FROM jokes;
+
+-- Show sentiment distribution
+SELECT
+    'Sentiment Distribution:' as info,
+    user_sentiment,
+    COUNT(*) as count
+FROM user_sentiments
+GROUP BY user_sentiment
+ORDER BY count DESC;
+
+-- Show most active users
+SELECT
+    'Most Active Users:' as info,
+    user_identifier,
+    COUNT(*) as ratings_given
+FROM user_sentiments
+GROUP BY user_identifier
+ORDER BY ratings_given DESC
+LIMIT 5;
+
+-- Show community positivity by joke
+SELECT
+    'Community Positivity by Joke:' as info,
+    j.joke,
+    ROUND(AVG(CASE WHEN us.user_sentiment = 'up' THEN 1.0
+                   WHEN us.user_sentiment = 'down' THEN 0.0
+                   ELSE 0.5 END) * 100, 1) as positivity_percent,
+    COUNT(*) as total_ratings
+FROM jokes j
+JOIN user_sentiments us ON j.id = us.joke_id
+GROUP BY j.id, j.joke
+ORDER BY positivity_percent DESC;
@@ -1,28 +0,0 @@
# database.py
import sqlite3

conn = sqlite3.connect('jokes.db')
cursor = conn.cursor()

cursor.execute('''CREATE TABLE IF NOT EXISTS jokes (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    joke TEXT NOT NULL,
    contributor TEXT NOT NULL,
    created_date TEXT NOT NULL,
    approved BOOLEAN DEFAULT 0,
    sentiment_score REAL DEFAULT 0.0,
    sentiment_label TEXT DEFAULT '😐 Neutral'
)''')

# Create a new table to store user sentiments for each joke
cursor.execute('''CREATE TABLE IF NOT EXISTS user_sentiments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    joke_id INTEGER NOT NULL,
    user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (joke_id) REFERENCES jokes(id) ON DELETE CASCADE
)''')

conn.commit()
print("✅ Database and tables created successfully with approval system and user sentiment tracking!")
conn.close()
@@ -1,74 +0,0 @@
|
|||||||
# debug_db.py
|
|
||||||
import sqlite3
|
|
||||||
import os
|
|
||||||
|
|
||||||
def check_database():
|
|
||||||
print("🔍 DATABASE DEBUG CHECK")
|
|
||||||
print("=" * 40)
|
|
||||||
|
|
||||||
# Check current directory
|
|
||||||
print(f"📁 Current directory: {os.getcwd()}")
|
|
||||||
print(f"📁 Database file exists: {os.path.exists('jokes.db')}")
|
|
    if not os.path.exists('jokes.db'):
        print("❌ ERROR: jokes.db file not found in current directory!")
        print("💡 Try running: python3 database.py first")
        return

    try:
        # Connect to database
        conn = sqlite3.connect('jokes.db')
        cursor = conn.cursor()

        print("✅ Connected to database successfully")

        # Check what tables exist
        cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
        tables = cursor.fetchall()
        print(f"\n📋 Tables found: {[table[0] for table in tables]}")

        if 'jokes' not in [table[0] for table in tables]:
            print("❌ ERROR: 'jokes' table not found!")
            print("💡 Try running: python3 database.py to create the table")
            conn.close()
            return

        # Check number of jokes
        cursor.execute("SELECT COUNT(*) FROM jokes")
        count = cursor.fetchone()[0]
        print(f"📊 Total jokes in database: {count}")

        if count == 0:
            print("⚠️ WARNING: Database is empty!")
            print("💡 Load sample data with: sqlite3 jokes.db < sample_data.sql")
        else:
            # Show some sample data
            print("\n🔍 Sample jokes (first 3):")
            print("-" * 50)
            cursor.execute("SELECT id, joke, contributor, sentiment_label FROM jokes LIMIT 3")
            jokes = cursor.fetchall()

            for joke in jokes:
                print(f"\nID: {joke[0]}")
                print(f"Joke: {joke[1]}")
                print(f"Contributor: {joke[2]}")
                print(f"Mood: {joke[3]}")
                print("-" * 30)

        # Check column names
        print("\n📝 Table structure:")
        cursor.execute("PRAGMA table_info(jokes)")
        columns = cursor.fetchall()
        for col in columns:
            print(f"  {col[1]} ({col[2]})")

        conn.close()
        print("\n✅ Debug check completed!")

    except sqlite3.Error as e:
        print(f"❌ SQLite error: {e}")
    except Exception as e:
        print(f"❌ General error: {e}")

if __name__ == "__main__":
    check_database()
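The debug script above discovers tables via `sqlite_master` and inspects columns with `PRAGMA table_info`. The same two checks can be exercised against a throwaway in-memory database, independent of the bot's files; a minimal sketch (the `jokes` table here is a stand-in, not the project's full schema):

```python
import sqlite3

# Throwaway in-memory database standing in for jokes.db
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE jokes (id INTEGER PRIMARY KEY, joke TEXT NOT NULL)")

# Same query check_db.py uses to discover tables
tables = [row[0] for row in
          conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['jokes']

# PRAGMA table_info yields (cid, name, type, notnull, dflt_value, pk) per column
cols = list(conn.execute("PRAGMA table_info(jokes)"))
for col in cols:
    print(f"{col[1]} ({col[2]})")

conn.close()
```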
107
jokes_bot/v4.0/demo_features.py
Normal file
@@ -0,0 +1,107 @@
#!/usr/bin/env python3
"""
Demo script to showcase the enhanced user identification and analytics features
of the Joke Bot v4.0
"""

import sqlite3
import os

def demo_user_identification():
    """Demonstrate the new user identification features"""
    print("🎭 Joke Bot v4.0 - User Identification Demo 🎭")
    print("=" * 50)

    # Import the main module functions
    import sys
    sys.path.append('.')

    # Reinitialize database for demo
    if os.path.exists('jokes.db'):
        os.remove('jokes.db')
        print("🗑️ Cleaned existing database for demo")

    from jokes import initialize_database, add_user_sentiment, get_user_sentiment_for_joke
    from jokes import get_detailed_sentiment_stats, get_user_sentiment_history
    from jokes import get_popular_jokes_by_user, get_top_users_by_joke_preference

    # Initialize database
    print("\n🔧 Initializing database with enhanced schema...")
    initialize_database()

    db = sqlite3.connect('jokes.db')

    print("\n🎯 Demonstrating User Identification Features:")
    print("-" * 40)

    # Simulate different users rating the same joke
    joke_id = 1  # First joke in our sample data
    test_users = ['Alice', 'Bob', 'Charlie', 'Diana', 'Eve']
    test_ratings = ['up', 'up', 'down', 'up', 'neutral']

    print(f"\n📋 Simulating ratings for joke #{joke_id}:")
    for user, rating in zip(test_users, test_ratings):
        success, action = add_user_sentiment(db, joke_id, rating, user)
        rating_emoji = {'up': '👍', 'down': '👎', 'neutral': '😐'}[rating]
        print(f"  {rating_emoji} {user} rated: {rating} ({action})")

    # Show community sentiment
    avg_sentiment, total_votes = get_user_sentiment_for_joke(db, joke_id)
    print(f"\n👥 Community Rating: {avg_sentiment} ({total_votes} votes)")

    # Show detailed breakdown
    detailed_stats = get_detailed_sentiment_stats(db, joke_id)
    print("🔍 Detailed Breakdown:")
    for sentiment, count, users in detailed_stats:
        sentiment_emoji = {'up': '👍', 'down': '👎', 'neutral': '😐'}[sentiment]
        print(f"  {sentiment_emoji} {sentiment.capitalize()}: {count} votes by {users}")

    print("\n👤 Individual User Analytics:")
    print("-" * 30)

    # Show Alice's rating history (loop variables renamed so they don't shadow joke_id)
    alice_history = get_user_sentiment_history(db, 'Alice')
    print(f"\n📝 Alice's Rating History ({len(alice_history)} ratings):")
    for hist_id, joke_text, sentiment, created_at in alice_history:
        sentiment_emoji = {'up': '👍', 'down': '👎', 'neutral': '😐'}[sentiment]
        print(f"  {sentiment_emoji} {joke_text[:40]}...")

    # Show Bob's favorite jokes
    bob_favorites = get_popular_jokes_by_user(db, 'Bob')
    print(f"\n😄 Bob's Favorite Jokes ({len(bob_favorites)} likes):")
    for fav_id, joke_text, contributor, sentiment, created_at in bob_favorites:
        print(f"  👍 {joke_text[:50]}...")
        print(f"     By: {contributor}")

    print("\n🏆 Community Leaderboard:")
    print("-" * 25)

    # Show top users by positivity
    top_users = get_top_users_by_joke_preference(db)
    if top_users:
        for i, (username, total, positive, negative, positivity_pct) in enumerate(top_users, 1):
            print(f"{i}. {username}: {positivity_pct}% positive ({positive}/{total} ratings)")

    # Test duplicate rating prevention
    print("\n🔄 Testing Rating Update Feature:")
    print("-" * 35)
    success, action = add_user_sentiment(db, joke_id, 'down', 'Alice')  # Change Alice's rating
    print(f"Alice changed her rating: {action}")

    # Show updated stats
    avg_sentiment, total_votes = get_user_sentiment_for_joke(db, joke_id)
    print(f"Updated Community Rating: {avg_sentiment} ({total_votes} votes)")

    db.close()

    print("\n🎉 Demo Complete!")
    print("✨ Key Features Demonstrated:")
    print("  • User identification for each sentiment")
    print("  • Personal rating history tracking")
    print("  • Favorite joke identification")
    print("  • Community analytics and leaderboards")
    print("  • Duplicate rating prevention with update capability")
    print("  • Detailed sentiment breakdowns")

if __name__ == "__main__":
    demo_user_identification()
Binary file not shown.
@@ -3,6 +3,75 @@ import sqlite3
import random
from datetime import datetime

def initialize_database():
    """Create the database and tables if they don't exist"""
    conn = sqlite3.connect('jokes.db')
    cursor = conn.cursor()

    # Create jokes table
    cursor.execute('''CREATE TABLE IF NOT EXISTS jokes (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        joke TEXT NOT NULL,
        contributor TEXT NOT NULL,
        created_date TEXT NOT NULL,
        approved BOOLEAN DEFAULT 0,
        sentiment_score REAL DEFAULT 0.0,
        sentiment_label TEXT DEFAULT '😐 Neutral'
    )''')

    # Create user_sentiments table with user identifier
    cursor.execute('''CREATE TABLE IF NOT EXISTS user_sentiments (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        joke_id INTEGER NOT NULL,
        user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
        user_identifier TEXT NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        FOREIGN KEY (joke_id) REFERENCES jokes(id) ON DELETE CASCADE
    )''')

    # Check if jokes table is empty
    cursor.execute('SELECT COUNT(*) FROM jokes')
    joke_count = cursor.fetchone()[0]

    if joke_count == 0:
        # Insert sample jokes if table is empty
        sample_jokes = [
            ('Why don\'t scientists trust atoms? Because they make up everything!', 'ScienceFan', '2024-01-15 10:30:00', 1),
            ('I told my wife she was drawing her eyebrows too high. She looked surprised.', 'Joker123', '2024-01-16 14:20:00', 1),
            ('Why did the scarecrow win an award? He was outstanding in his field!', 'FarmLife', '2024-01-17 09:15:00', 1),
            ('What do you call a fish with no eyes? Fsh!', 'MarineBio', '2024-01-18 16:45:00', 1),
            ('I\'m reading a book on anti-gravity. It\'s impossible to put down!', 'PhysicsNerd', '2024-01-19 11:30:00', 1),
            ('Why did the computer go to the doctor? Because it had a virus.', 'TechSupport', '2024-01-20 13:10:00', 1),
            ('What do you call a bear with no teeth? A gummy bear.', 'WildlifeFan', '2024-01-21 15:25:00', 1),
            ('Why did the bicycle fall over? Because it was two-tired.', 'Cyclist', '2024-01-22 10:00:00', 1),
            ('What do you call a sleeping bull? A bulldozer.', 'Cowboy', '2024-01-23 14:35:00', 1),
            ('Why did the math book look so sad? Because it had too many problems.', 'Student', '2024-01-24 09:50:00', 1)
        ]

        cursor.executemany('''
            INSERT INTO jokes (joke, contributor, created_date, approved)
            VALUES (?, ?, ?, ?)
        ''', sample_jokes)

        # Add some sample user sentiments with user identifiers
        sample_sentiments = [
            (1, 'up', 'User123'), (1, 'up', 'FunnyPerson'), (1, 'neutral', 'JokeLover'),
            (2, 'down', 'CritiqueMaster'), (2, 'up', 'HappyViewer'),
            (3, 'up', 'User123'), (3, 'up', 'FunnyPerson'), (3, 'up', 'ComedyFan')
        ]

        cursor.executemany('''
            INSERT INTO user_sentiments (joke_id, user_sentiment, user_identifier)
            VALUES (?, ?, ?)
        ''', sample_sentiments)

        conn.commit()
        print(f"✅ Database initialized with {len(sample_jokes)} sample jokes and {len(sample_sentiments)} sample sentiments!")
    else:
        print(f"✅ Connected to database with {joke_count} jokes already present.")

    conn.close()
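Note that this schema permits several rows per (joke_id, user_identifier) pair; the one-rating-per-user rule is enforced in application code (`add_user_sentiment`). A hypothetical alternative, not what this commit does, is a `UNIQUE` constraint plus SQLite's `INSERT ... ON CONFLICT` upsert (available since SQLite 3.24), which moves that rule into the database itself:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''CREATE TABLE user_sentiments (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    joke_id INTEGER NOT NULL,
    user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
    user_identifier TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    UNIQUE (joke_id, user_identifier)  -- hypothetical: one row per user per joke
)''')

def rate(joke_id, sentiment, user):
    # Upsert: insert a new rating, or overwrite the existing one on conflict
    conn.execute('''INSERT INTO user_sentiments (joke_id, user_sentiment, user_identifier)
                    VALUES (?, ?, ?)
                    ON CONFLICT (joke_id, user_identifier)
                    DO UPDATE SET user_sentiment = excluded.user_sentiment''',
                 (joke_id, sentiment, user))

rate(1, 'up', 'Alice')
rate(1, 'down', 'Alice')  # replaces Alice's earlier 'up' instead of adding a row
rows = conn.execute("SELECT user_sentiment FROM user_sentiments WHERE joke_id = 1").fetchall()
print(rows)  # [('down',)]
```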
def get_user_sentiment_for_joke(db, joke_id):
    """Get the average user sentiment for a specific joke"""
    cursor = db.execute('''
@@ -25,29 +94,127 @@ def get_user_sentiment_for_joke(db, joke_id):
    avg_sentiment, total_votes = result if result else ('😐 Neutral', 0)
    return avg_sentiment, total_votes

def get_detailed_sentiment_stats(db, joke_id):
    """Get detailed sentiment statistics including user breakdown"""
    cursor = db.execute('''
        SELECT
            user_sentiment,
            COUNT(*) as count,
            GROUP_CONCAT(user_identifier) as users
        FROM user_sentiments
        WHERE joke_id = ?
        GROUP BY user_sentiment
        ORDER BY count DESC
    ''', (joke_id,))

    return cursor.fetchall()

def get_user_sentiment_history(db, user_identifier):
    """Get sentiment history for a specific user"""
    cursor = db.execute('''
        SELECT
            j.id,
            j.joke,
            us.user_sentiment,
            us.created_at
        FROM user_sentiments us
        JOIN jokes j ON us.joke_id = j.id
        WHERE us.user_identifier = ?
        ORDER BY us.created_at DESC
    ''', (user_identifier,))

    return cursor.fetchall()

def get_popular_jokes_by_user(db, user_identifier):
    """Get jokes that a specific user has rated highly"""
    cursor = db.execute('''
        SELECT
            j.id,
            j.joke,
            j.contributor,
            us.user_sentiment,
            us.created_at
        FROM user_sentiments us
        JOIN jokes j ON us.joke_id = j.id
        WHERE us.user_identifier = ? AND us.user_sentiment = 'up'
        ORDER BY us.created_at DESC
    ''', (user_identifier,))

    return cursor.fetchall()

def add_user_sentiment(db, joke_id, user_choice, user_identifier):
    """Add user sentiment for a specific joke with user identification"""
    try:
        # Check if user already rated this joke
        cursor = db.execute('''
            SELECT id FROM user_sentiments
            WHERE joke_id = ? AND user_identifier = ?
        ''', (joke_id, user_identifier))

        existing_rating = cursor.fetchone()

        if existing_rating:
            # Update existing rating
            db.execute('''
                UPDATE user_sentiments
                SET user_sentiment = ?, created_at = CURRENT_TIMESTAMP
                WHERE joke_id = ? AND user_identifier = ?
            ''', (user_choice, joke_id, user_identifier))
            action = "updated"
        else:
            # Insert new rating
            db.execute('''
                INSERT INTO user_sentiments (joke_id, user_sentiment, user_identifier)
                VALUES (?, ?, ?)
            ''', (joke_id, user_choice, user_identifier))
            action = "recorded"

        db.commit()
        return True, action
    except Exception as e:
        print(f"❌ Error saving sentiment: {e}")
        return False, None

def get_top_users_by_joke_preference(db):
    """Get analytics on which users prefer which types of jokes"""
    cursor = db.execute('''
        SELECT
            user_identifier,
            COUNT(*) as total_ratings,
            SUM(CASE WHEN user_sentiment = 'up' THEN 1 ELSE 0 END) as positive_ratings,
            SUM(CASE WHEN user_sentiment = 'down' THEN 1 ELSE 0 END) as negative_ratings,
            ROUND(AVG(CASE WHEN user_sentiment = 'up' THEN 1
                           WHEN user_sentiment = 'down' THEN 0
                           ELSE 0.5 END) * 100, 1) as positivity_percentage
        FROM user_sentiments
        GROUP BY user_identifier
        HAVING COUNT(*) >= 2
        ORDER BY positivity_percentage DESC, total_ratings DESC
    ''')

    return cursor.fetchall()
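The positivity percentage in `get_top_users_by_joke_preference` scores each rating as up = 1, down = 0, neutral = 0.5, then averages and scales to 0-100. The same scoring, sketched in plain Python for clarity:

```python
def positivity_percentage(ratings):
    """Mirror of the SQL CASE expression: up=1, down=0, neutral=0.5, averaged * 100."""
    score = {'up': 1.0, 'down': 0.0, 'neutral': 0.5}
    return round(sum(score[r] for r in ratings) / len(ratings) * 100, 1)

# A user with two 'up' votes and one 'neutral': (1 + 1 + 0.5) / 3 * 100
print(positivity_percentage(['up', 'up', 'neutral']))  # 83.3
```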
def main():
    # Initialize database and tables on startup
    initialize_database()

    db = sqlite3.connect('jokes.db')

    # Get user identifier for session
    print("🤖 Welcome to Joke Bot! 🤖")
    user_identifier = input("Please enter your username (or press Enter for 'Guest'): ").strip() or "Guest"
    print(f"👤 Logged in as: {user_identifier}")

    while True:
        print("\n" + "="*40)
        print("🤖 JOKE BOT MENU 🤖")
        print("="*40)
        print("1. Get random joke")
        print("2. Add new joke")
        print("3. View your joke ratings")
        print("4. See what jokes you liked")
        print("5. View community analytics")
        print("6. Quit")

        choice = input("\nYour choice: ").strip()
@@ -72,17 +239,27 @@ def main():
                if total_votes > 0:
                    print(f"   👥 Community Rating: {avg_sentiment} ({total_votes} votes)")

                    # Show detailed breakdown
                    detailed_stats = get_detailed_sentiment_stats(db, joke_id)
                    if detailed_stats:
                        print("   🔍 Detailed Breakdown:")
                        for sentiment, count, users in detailed_stats:
                            sentiment_icon = {'up': '👍', 'down': '👎', 'neutral': '😐'}[sentiment]
                            print(f"      {sentiment_icon} {sentiment.capitalize()}: {count} votes")

                # Ask user for their sentiment
                print("\n🎯 Rate this joke: 👍 (U)p, 👎 (D)own, or (N)eutral?")
                user_input = input("Your choice (u/d/n): ").strip().lower()
                user_choice = 'up' if user_input in ['u', 'up'] else 'down' if user_input in ['d', 'down'] else 'neutral'

                success, action = add_user_sentiment(db, joke_id, user_choice, user_identifier)
                if success:
                    sentiment_text = '👍 Up' if user_choice == 'up' else '👎 Down' if user_choice == 'down' else '😐 Neutral'
                    print(f"✅ Your rating ({sentiment_text}) has been {action}!")
                else:
                    print("❌ Could not save your rating.")
            else:
                print("❌ No approved jokes in the database yet!")

        elif choice == "2":
            new_joke = input("Enter your joke: ").strip()
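The detailed breakdown shown after a joke relies on SQLite's `GROUP_CONCAT(user_identifier)`, which collapses each sentiment group's usernames into a single comma-separated string. A minimal in-memory illustration of that aggregate (the concatenation order within a group is not guaranteed by SQLite):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE user_sentiments (joke_id INT, user_sentiment TEXT, user_identifier TEXT)")
conn.executemany("INSERT INTO user_sentiments VALUES (1, ?, ?)",
                 [('up', 'Alice'), ('up', 'Bob'), ('down', 'Charlie')])

rows = conn.execute('''SELECT user_sentiment, COUNT(*), GROUP_CONCAT(user_identifier)
                       FROM user_sentiments WHERE joke_id = 1
                       GROUP BY user_sentiment ORDER BY COUNT(*) DESC''').fetchall()
print(rows)  # e.g. [('up', 2, 'Alice,Bob'), ('down', 1, 'Charlie')]
```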
@@ -111,11 +288,69 @@ def main():
                print(f"❌ Error saving joke: {e}")

        elif choice == "3":
            # View user's rating history
            user_history = get_user_sentiment_history(db, user_identifier)
            if user_history:
                print(f"\n📊 Your Joke Ratings History ({len(user_history)} ratings):")
                print("-" * 50)
                for joke_id, joke_text, sentiment, created_at in user_history[:10]:  # Show last 10
                    sentiment_icon = {'up': '👍', 'down': '👎', 'neutral': '😐'}[sentiment]
                    print(f"{sentiment_icon} {joke_text[:50]}{'...' if len(joke_text) > 50 else ''}")
                    print(f"   Rated: {created_at}")
                    print()
            else:
                print(f"📝 {user_identifier}, you haven't rated any jokes yet!")

        elif choice == "4":
            # See what jokes the user liked
            liked_jokes = get_popular_jokes_by_user(db, user_identifier)
            if liked_jokes:
                print(f"\n😄 Jokes You Liked ({len(liked_jokes)} favorites):")
                print("-" * 50)
                for joke_id, joke_text, contributor, sentiment, created_at in liked_jokes:
                    print(f"👍 {joke_text}")
                    print(f"   👤 By: {contributor} | ⏰ {created_at}")
                    print()
            else:
                print(f"😢 {user_identifier}, you haven't liked any jokes yet!")

        elif choice == "5":
            # View community analytics
            print("\n📈 COMMUNITY ANALYTICS")
            print("=" * 30)

            # Top users by positivity
            top_users = get_top_users_by_joke_preference(db)
            if top_users:
                print("\n🏆 Most Positive Users:")
                print("-" * 25)
                for i, (username, total, positive, negative, positivity_pct) in enumerate(top_users[:5], 1):
                    print(f"{i}. {username}: {positivity_pct}% positive ({positive}/{total} ratings)")

            # Overall joke statistics
            cursor = db.execute('''
                SELECT
                    COUNT(DISTINCT joke_id) as total_rated_jokes,
                    COUNT(*) as total_ratings,
                    AVG(CASE WHEN user_sentiment = 'up' THEN 1
                             WHEN user_sentiment = 'down' THEN 0
                             ELSE 0.5 END) * 100 as overall_positivity
                FROM user_sentiments
            ''')
            stats = cursor.fetchone()
            if stats and stats[0] > 0:
                total_jokes, total_ratings, positivity = stats
                print("\n📊 Overall Statistics:")
                print(f"   Total jokes rated: {total_jokes}")
                print(f"   Total ratings: {total_ratings}")
                print(f"   Community positivity: {positivity:.1f}%")

        elif choice == "6":
            print(f"\n👋 Goodbye, {user_identifier}!")
            break

        else:
            print("❌ Invalid choice. Please select 1-6.")

    db.close()
@@ -1,113 +0,0 @@
#!/usr/bin/env python3
"""
Script to populate the jokes database with sample data.
"""

import sqlite3
import os

def populate_database():
    # Connect to the database
    db_path = 'jokes.db'
    if not os.path.exists(db_path):
        print(f"❌ Database {db_path} does not exist!")
        print("Please run database.py first to create the database.")
        return False

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Check if jokes table is empty before adding data
    try:
        cursor.execute('SELECT COUNT(*) FROM jokes')
        count = cursor.fetchone()[0]
    except sqlite3.OperationalError:
        print("❌ The jokes table does not exist. Please run database.py first.")
        conn.close()
        return False

    if count > 0:
        overwrite = input(f"⚠️ Database already contains {count} jokes. Overwrite? (y/N): ")
        if overwrite.lower() != 'y':
            print("❌ Operation cancelled.")
            conn.close()
            return False

    # Clear existing data first
    # Only try to delete from user_sentiments if the table exists
    try:
        cursor.execute('DELETE FROM user_sentiments')
    except sqlite3.OperationalError:
        # user_sentiments table doesn't exist, which is OK
        print("ℹ️ user_sentiments table does not exist yet.")
    cursor.execute('DELETE FROM jokes')

    # Use the clean SQL file without SELECT statements
    sql_file = 'clean_sample_data.sql'
    if not os.path.exists(sql_file):
        print(f"❌ SQL file {sql_file} does not exist!")
        return False

    try:
        # Execute the clean SQL file directly
        with open(sql_file, 'r', encoding='utf-8') as f:
            sql_commands = f.read()

        cursor.executescript(sql_commands)

        conn.commit()
        print("✅ Successfully populated the jokes table with sample jokes!")

        # Count the total number of jokes
        cursor.execute('SELECT COUNT(*) FROM jokes')
        count = cursor.fetchone()[0]
        print(f"📊 Total jokes in database: {count}")

        # Add some sample user sentiments for the first few jokes
        print("🎯 Adding sample user sentiments...")
        jokes_with_sentiment = [(1, 'up'), (1, 'up'), (1, 'neutral'),
                                (2, 'down'), (2, 'up'),
                                (3, 'up'), (3, 'up'), (3, 'up')]

        for joke_id, sentiment in jokes_with_sentiment:
            try:
                cursor.execute('''
                    INSERT INTO user_sentiments (joke_id, user_sentiment)
                    VALUES (?, ?)
                ''', (joke_id, sentiment))
            except sqlite3.OperationalError:
                # If user_sentiments table doesn't exist, skip adding sentiments
                print("ℹ️ Skipping user sentiments as the table doesn't exist yet.")
                break

        conn.commit()
        print(f"✅ Added sample user sentiments for {len(jokes_with_sentiment)} joke entries")

        # Show user sentiment distribution if the table exists
        try:
            cursor.execute('SELECT user_sentiment, COUNT(*) FROM user_sentiments GROUP BY user_sentiment')
            user_distribution = cursor.fetchall()
            if user_distribution:
                print('\n👥 User Sentiment distribution:')
                for sentiment, cnt in user_distribution:
                    emoji = {'up': '👍', 'down': '👎', 'neutral': '😐'}[sentiment]
                    print(f'  {emoji} {sentiment.capitalize()}: {cnt} ratings')
        except sqlite3.OperationalError:
            print("\nℹ️ User sentiment table not available yet.")

        conn.close()
        return True
    except Exception as e:
        print(f"❌ Error executing SQL commands: {e}")
        conn.rollback()
        conn.close()
        return False

if __name__ == '__main__':
    print("🔄 Populating jokes database with sample data...")
    success = populate_database()
    if success:
        print("\n🎉 Database successfully populated! You can now run jokes.py to enjoy the jokes.")
    else:
        print("\n💥 Failed to populate the database.")
@@ -1,37 +0,0 @@
-- Clear existing data first (optional - uncomment if needed)
-- DELETE FROM jokes;

-- Insert 20 dummy jokes with various sentiments
INSERT INTO jokes (joke, contributor, published, sentiment_score, sentiment_label) VALUES
('Why don''t scientists trust atoms? Because they make up everything!', 'ScienceFan', '2024-01-15 10:30:00', 0.75, '😊 Positive'),
('I told my wife she was drawing her eyebrows too high. She looked surprised.', 'Joker123', '2024-01-16 14:20:00', 0.35, '😊 Positive'),
('Why did the scarecrow win an award? He was outstanding in his field!', 'FarmLife', '2024-01-17 09:15:00', 0.65, '😊 Positive'),
('What do you call a fish with no eyes? Fsh!', 'MarineBio', '2024-01-18 16:45:00', 0.25, '😊 Positive'),
('I''m reading a book on anti-gravity. It''s impossible to put down!', 'PhysicsNerd', '2024-01-19 11:30:00', 0.45, '😊 Positive'),
('Why did the computer go to the doctor? Because it had a virus.', 'TechSupport', '2024-01-20 13:10:00', 0.05, '😐 Neutral'),
('What do you call a bear with no teeth? A gummy bear.', 'WildlifeFan', '2024-01-21 15:25:00', 0.08, '😐 Neutral'),
('Why did the bicycle fall over? Because it was two-tired.', 'Cyclist', '2024-01-22 10:00:00', -0.02, '😐 Neutral'),
('What do you call a sleeping bull? A bulldozer.', 'Cowboy', '2024-01-23 14:35:00', 0.03, '😐 Neutral'),
('Why did the math book look so sad? Because it had too many problems.', 'Student', '2024-01-24 09:50:00', -0.05, '😐 Neutral'),
('I used to play piano by ear, but now I use my hands.', 'Musician', '2024-01-25 12:15:00', -0.15, '😒 Negative'),
('I told my computer I needed a break, and now it won''t stop sending me Kit-Kat ads.', 'OfficeWorker', '2024-01-26 16:30:00', -0.25, '😒 Negative'),
('Parallel lines have so much in common. It''s a shame they''ll never meet.', 'MathTeacher', '2024-01-27 11:40:00', -0.35, '😒 Negative'),
('My wife told me to stop impersonating a flamingo. I had to put my foot down.', 'Husband', '2024-01-28 14:55:00', -0.20, '😒 Negative'),
('I told my girlfriend she drew her eyebrows too high. She seemed surprised.', 'Boyfriend', '2024-01-29 10:10:00', -0.30, '😒 Negative'),
('What''s orange and sounds like a parrot? A carrot!', 'Vegetarian', '2024-01-30 13:20:00', 0.85, '😊 Positive'),
('Why don''t eggs tell jokes? They''d crack each other up!', 'Chef', '2024-01-31 15:45:00', 0.90, '😊 Positive'),
('I invented a new word: Plagiarism!', 'Writer', '2024-02-01 09:30:00', 0.78, '😊 Positive'),
('Why did the golfer bring two pairs of pants? In case he got a hole in one!', 'Golfer', '2024-02-02 12:15:00', 0.82, '😊 Positive'),
('What do you call a fake noodle? An impasta!', 'ItalianFood', '2024-02-03 14:40:00', 0.88, '😊 Positive');

-- Show total count
SELECT '✅ Inserted ' || COUNT(*) || ' jokes!' as message FROM jokes;

-- Show distribution by sentiment
SELECT
    sentiment_label,
    COUNT(*) as count,
    '📊' as chart
FROM jokes
GROUP BY sentiment_label
ORDER BY count DESC;
@@ -1,62 +0,0 @@
#!/usr/bin/env python3
"""
Setup script to properly initialize the database following the project specification.
This ensures the database has the correct schema and sample data.
"""

import os
import sqlite3
import subprocess
import sys

def setup_database():
    db_path = 'jokes.db'

    print("🚀 Starting database setup process...")

    # Step 1: Remove existing database file if it exists
    if os.path.exists(db_path):
        print(f"🗑️ Removing existing database: {db_path}")
        os.remove(db_path)
        print("✅ Old database removed")
    else:
        print("📋 No existing database to remove")

    # Step 2: Create database with correct schema
    print("\n🔧 Creating database with correct schema...")
    result = subprocess.run([sys.executable, 'database.py'], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"❌ Error creating database: {result.stderr}")
        return False
    else:
        print("✅ Database schema created successfully")

    # Step 3: Populate database with sample data
    print("\n📚 Populating database with sample data...")
    result = subprocess.run([sys.executable, 'populate_db.py'], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"❌ Error populating database: {result.stderr}")
        return False
    else:
        print("✅ Database populated with sample data")

    # Step 4: Verify the database
    print("\n🔍 Verifying database setup...")
    result = subprocess.run([sys.executable, 'check_db.py'], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"❌ Error verifying database: {result.stderr}")
        return False
    else:
        print("✅ Database verified successfully")
        print(result.stdout)

    print("\n🎉 Database setup completed successfully!")
    print("You can now run 'python jokes.py' to start the Joke Bot.")

    return True

if __name__ == '__main__':
    success = setup_database()
    if not success:
        print("\n💥 Database setup failed. Please check the errors above.")
        sys.exit(1)
37
jokes_bot/v4.0/test_basic.py
Normal file
@@ -0,0 +1,37 @@
#!/usr/bin/env python3
"""Simple test to verify basic functionality"""

import sqlite3
import os

# Clean up any existing database
if os.path.exists('jokes.db'):
    os.remove('jokes.db')
    print("Cleaned existing database")

# Test importing and initializing
try:
    from jokes import initialize_database
    print("✓ Successfully imported jokes module")

    initialize_database()
    print("✓ Database initialized successfully")

    # Test database connection
    db = sqlite3.connect('jokes.db')
    cursor = db.execute("SELECT COUNT(*) FROM jokes")
    count = cursor.fetchone()[0]
    print(f"✓ Found {count} jokes in database")

    # Test user_sentiments table structure
    cursor = db.execute("PRAGMA table_info(user_sentiments)")
    columns = cursor.fetchall()
    print("✓ user_sentiments table columns:")
    for col in columns:
        print(f"  - {col[1]} ({col[2]})")

    db.close()
    print("✓ All tests passed!")

except Exception as e:
    print(f"✗ Error: {e}")
@@ -1,51 +0,0 @@
#!/usr/bin/env python3
"""
Script to upgrade an existing database with the new user_sentiments table.
"""

import sqlite3
import os


def upgrade_database():
    db_path = 'jokes.db'

    if not os.path.exists(db_path):
        print(f"❌ Database {db_path} does not exist!")
        print("Please run database.py first to create the database.")
        return False

    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()

    # Check if user_sentiments table already exists
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='user_sentiments';")
    table_exists = cursor.fetchone()

    if table_exists:
        print("✅ Database is already up to date!")
        conn.close()
        return True

    print("🔄 Upgrading database schema...")

    # Create the user_sentiments table
    cursor.execute('''CREATE TABLE user_sentiments (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        joke_id INTEGER NOT NULL,
        user_sentiment TEXT CHECK(user_sentiment IN ('up', 'down', 'neutral')) DEFAULT 'neutral',
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        FOREIGN KEY (joke_id) REFERENCES jokes(id) ON DELETE CASCADE
    )''')

    conn.commit()
    print("✅ Database upgraded successfully! Added user_sentiments table.")
    conn.close()
    return True


if __name__ == '__main__':
    print("🔄 Checking database schema...")
    success = upgrade_database()
    if success:
        print("\n🎉 Database is ready for the enhanced application!")
    else:
        print("\n💥 Failed to upgrade the database.")
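The deleted upgrade script above created `user_sentiments` without the new `user_identifier` column. For a database that already has the old table, a minimal hypothetical migration sketch (standalone; the `'anonymous'` placeholder is an assumption — SQLite requires a non-NULL default when adding a NOT NULL column to a populated table) could look like:

```python
import sqlite3

def add_user_identifier_column(conn):
    # Add user_identifier to an existing user_sentiments table, if missing.
    cols = [row[1] for row in conn.execute("PRAGMA table_info(user_sentiments)")]
    if "user_identifier" not in cols:
        # Existing rows get a placeholder owner; NOT NULL requires a default here.
        conn.execute(
            "ALTER TABLE user_sentiments "
            "ADD COLUMN user_identifier TEXT NOT NULL DEFAULT 'anonymous'"
        )
        conn.commit()

# Demo against an in-memory copy of the old (v3-style) table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_sentiments "
    "(id INTEGER PRIMARY KEY, joke_id INTEGER, user_sentiment TEXT)"
)
add_user_identifier_column(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(user_sentiments)")]
```

`ALTER TABLE ... ADD COLUMN` is the one in-place schema change SQLite supports well; anything more invasive needs the create-copy-drop-rename pattern.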
(The remaining hunks delete the v3.0 virtual environment that had been checked into the repository: the stock venv activation scripts (Activate.ps1, activate, activate.csh, activate.fish), pip and httpx launcher stubs, python3.9 symlinks, the setuptools `_distutils_hack` shim, and the vendored anyio LICENSE and METADATA files. None of this is project code.)
|
|
||||||
|
|
||||||
Documentation
|
|
||||||
-------------
|
|
||||||
|
|
||||||
View full documentation at: https://anyio.readthedocs.io/
|
|
||||||
|
|
||||||
Features
|
|
||||||
--------
|
|
||||||
|
|
||||||
AnyIO offers the following functionality:
|
|
||||||
|
|
||||||
* Task groups (nurseries_ in trio terminology)
|
|
||||||
* High-level networking (TCP, UDP and UNIX sockets)
|
|
||||||
|
|
||||||
* `Happy eyeballs`_ algorithm for TCP connections (more robust than that of asyncio on Python
|
|
||||||
3.8)
|
|
||||||
* async/await style UDP sockets (unlike asyncio where you still have to use Transports and
|
|
||||||
Protocols)
|
|
||||||
|
|
||||||
* A versatile API for byte streams and object streams
|
|
||||||
* Inter-task synchronization and communication (locks, conditions, events, semaphores, object
|
|
||||||
streams)
|
|
||||||
* Worker threads
|
|
||||||
* Subprocesses
|
|
||||||
* Asynchronous file I/O (using worker threads)
|
|
||||||
* Signal handling
|
|
||||||
|
|
||||||
AnyIO also comes with its own pytest_ plugin which also supports asynchronous fixtures.
|
|
||||||
It even works with the popular Hypothesis_ library.
|
|
||||||
|
|
||||||
.. _asyncio: https://docs.python.org/3/library/asyncio.html
|
|
||||||
.. _trio: https://github.com/python-trio/trio
|
|
||||||
.. _structured concurrency: https://en.wikipedia.org/wiki/Structured_concurrency
|
|
||||||
.. _nurseries: https://trio.readthedocs.io/en/stable/reference-core.html#nurseries-and-spawning
|
|
||||||
.. _Happy eyeballs: https://en.wikipedia.org/wiki/Happy_Eyeballs
|
|
||||||
.. _pytest: https://docs.pytest.org/en/latest/
|
|
||||||
.. _Hypothesis: https://hypothesis.works/
|
|
||||||
@@ -1,83 +0,0 @@
anyio-3.7.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
anyio-3.7.1.dist-info/LICENSE,sha256=U2GsncWPLvX9LpsJxoKXwX8ElQkJu8gCO9uC6s8iwrA,1081
anyio-3.7.1.dist-info/METADATA,sha256=mOhfXPB7qKVQh3dUtp2NgLysa10jHWeDBNnRg-93A_c,4708
anyio-3.7.1.dist-info/RECORD,,
anyio-3.7.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
anyio-3.7.1.dist-info/WHEEL,sha256=pkctZYzUS4AYVn6dJ-7367OJZivF2e8RA9b_ZBjif18,92
anyio-3.7.1.dist-info/entry_points.txt,sha256=_d6Yu6uiaZmNe0CydowirE9Cmg7zUL2g08tQpoS3Qvc,39
anyio-3.7.1.dist-info/top_level.txt,sha256=QglSMiWX8_5dpoVAEIHdEYzvqFMdSYWmCj6tYw2ITkQ,6
anyio/__init__.py,sha256=Pq9lO03Zm5ynIPlhkquaOuIc1dTTeLGNUQ5HT5qwYMI,4073
anyio/__pycache__/__init__.cpython-39.pyc,,
anyio/__pycache__/from_thread.cpython-39.pyc,,
anyio/__pycache__/lowlevel.cpython-39.pyc,,
anyio/__pycache__/pytest_plugin.cpython-39.pyc,,
anyio/__pycache__/to_process.cpython-39.pyc,,
anyio/__pycache__/to_thread.cpython-39.pyc,,
anyio/_backends/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
anyio/_backends/__pycache__/__init__.cpython-39.pyc,,
anyio/_backends/__pycache__/_asyncio.cpython-39.pyc,,
anyio/_backends/__pycache__/_trio.cpython-39.pyc,,
anyio/_backends/_asyncio.py,sha256=fgwZmYnGOxT_pX0OZTPPgRdFqKLjnKvQUk7tsfuNmfM,67056
anyio/_backends/_trio.py,sha256=EJAj0tNi0JRM2y3QWP7oS4ct7wnjMSYDG8IZUWMta-E,30035
anyio/_core/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
anyio/_core/__pycache__/__init__.cpython-39.pyc,,
anyio/_core/__pycache__/_compat.cpython-39.pyc,,
anyio/_core/__pycache__/_eventloop.cpython-39.pyc,,
anyio/_core/__pycache__/_exceptions.cpython-39.pyc,,
anyio/_core/__pycache__/_fileio.cpython-39.pyc,,
anyio/_core/__pycache__/_resources.cpython-39.pyc,,
anyio/_core/__pycache__/_signals.cpython-39.pyc,,
anyio/_core/__pycache__/_sockets.cpython-39.pyc,,
anyio/_core/__pycache__/_streams.cpython-39.pyc,,
anyio/_core/__pycache__/_subprocesses.cpython-39.pyc,,
anyio/_core/__pycache__/_synchronization.cpython-39.pyc,,
anyio/_core/__pycache__/_tasks.cpython-39.pyc,,
anyio/_core/__pycache__/_testing.cpython-39.pyc,,
anyio/_core/__pycache__/_typedattr.cpython-39.pyc,,
anyio/_core/_compat.py,sha256=XZfBUInEt7jaiTBI2Qbul7EpJdngbwTtG4Qj26un1YE,5726
anyio/_core/_eventloop.py,sha256=xJ8KflV1bJ9GAuQRr4o1ojv8wWya4nt_XARta8uLPwc,4083
anyio/_core/_exceptions.py,sha256=uOrN5l98o6UrOU6O3kPf0VCDl_zPP-kgZs4IyaLVgwU,2916
anyio/_core/_fileio.py,sha256=DWuIul5izCocmJpgqDDNKc_GhMUwayHKdM5R-sbT_A8,18026
anyio/_core/_resources.py,sha256=NbmU5O5UX3xEyACnkmYX28Fmwdl-f-ny0tHym26e0w0,435
anyio/_core/_signals.py,sha256=KKkZAYL08auydjZnK9S4FQsxx555jT4gXAMcTXdNaok,863
anyio/_core/_sockets.py,sha256=szcPd7kKBmlHnx8g_KJWZo2k6syouRNF2614ZrtqiV0,20667
anyio/_core/_streams.py,sha256=5gryxQiUisED8uFUAHje5O44RL9wyndNMANzzQWUn1U,1518
anyio/_core/_subprocesses.py,sha256=OSAcLAsjfCplXlRyTjWonfS1xU8d5MaZblXYqqY-BM4,4977
anyio/_core/_synchronization.py,sha256=Uquo_52vZ7iZzDDoaN_j-N7jeyAlefzOZ8Pxt9mU6gY,16747
anyio/_core/_tasks.py,sha256=1wZZWlpDkr6w3kMD629vzJDkPselDvx4XVElgTCVwyM,5316
anyio/_core/_testing.py,sha256=7Yll-DOI0uIlIF5VHLUpGGyDPWtDEjFZ85-6ZniwIJU,2217
anyio/_core/_typedattr.py,sha256=8o0gwQYSl04zlO9uHqcHu1T6hOw7peY9NW1mOX5DKnY,2551
anyio/abc/__init__.py,sha256=UkC-KDbyIoKeDUDhJciwANSoyzz_qaFh4Fb7_AvwjZc,2159
anyio/abc/__pycache__/__init__.cpython-39.pyc,,
anyio/abc/__pycache__/_resources.cpython-39.pyc,,
anyio/abc/__pycache__/_sockets.cpython-39.pyc,,
anyio/abc/__pycache__/_streams.cpython-39.pyc,,
anyio/abc/__pycache__/_subprocesses.cpython-39.pyc,,
anyio/abc/__pycache__/_tasks.cpython-39.pyc,,
anyio/abc/__pycache__/_testing.cpython-39.pyc,,
anyio/abc/_resources.py,sha256=h1rkzr3E0MFqdXLh9aLLXe-A5W7k_Jc-5XzNr6SJ4w4,763
anyio/abc/_sockets.py,sha256=WWYJ6HndKCEuvobAPDkmX0tjwN2FOxf3eTGb1DB7wHE,5243
anyio/abc/_streams.py,sha256=yGhOmlVI3W9whmzPuewwYQ2BrKhrUFuWZ4zpVLWOK84,6584
anyio/abc/_subprocesses.py,sha256=r-totaRbFX6kKV-4WTeuswz8n01aap8cvkYVQCRKN0M,2067
anyio/abc/_tasks.py,sha256=a_5DLyiCbp0K57LJPOyF-PZyXmUcv_p9VRXPFj_K03M,3413
anyio/abc/_testing.py,sha256=Eub7gXJ0tVPo_WN5iJAw10FrvC7C1uaL3b2neGr_pfs,1924
anyio/from_thread.py,sha256=aUVKXctPgZ5wK3p5VTyrtjDj9tSQSrH6xCjBuo-hv3A,16563
anyio/lowlevel.py,sha256=cOTncxRW5KeswqYQQdp0pfAw6OFWXius1SPhCYwHZL4,4647
anyio/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
anyio/pytest_plugin.py,sha256=_Txgl0-I3kO1rk_KATXmIUV57C34hajcJCGcgV26CU0,5022
anyio/streams/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
anyio/streams/__pycache__/__init__.cpython-39.pyc,,
anyio/streams/__pycache__/buffered.cpython-39.pyc,,
anyio/streams/__pycache__/file.cpython-39.pyc,,
anyio/streams/__pycache__/memory.cpython-39.pyc,,
anyio/streams/__pycache__/stapled.cpython-39.pyc,,
anyio/streams/__pycache__/text.cpython-39.pyc,,
anyio/streams/__pycache__/tls.cpython-39.pyc,,
anyio/streams/buffered.py,sha256=2ifplNLwT73d1UKBxrkFdlC9wTAze9LhPL7pt_7cYgY,4473
anyio/streams/file.py,sha256=-NP6jMcUd2f1VJwgcxgiRHdEsNnhE0lANl0ov_i7FrE,4356
anyio/streams/memory.py,sha256=QZhc5qdomBpGCgrUVWAaqEBxI0oklVxK_62atW6tnNk,9274
anyio/streams/stapled.py,sha256=9u2GxpiOPsGtgO1qsj2tVoW4b8bgiwp5rSDs1BFKkLM,4275
anyio/streams/text.py,sha256=1K4ZCLKl2b7yywrW6wKEeMu3xyQHE_T0aU5_oC9GPTE,5043
anyio/streams/tls.py,sha256=TbdCz1KtfEnp3mxHvkROXRefhE6S1LHiwgWiJX8zYaU,12099
anyio/to_process.py,sha256=_RSsG8UME2nGxeFEdg3OEfv9XshSQwrMU7DAbwWGx9U,9242
anyio/to_thread.py,sha256=HVpTvBei2sSXgJJeNKdwhJwQaW76LDbb1htQ-Mc6zDs,2146
@@ -1,5 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.40.0)
Root-Is-Purelib: true
Tag: py3-none-any

@@ -1,2 +0,0 @@
[pytest11]
anyio = anyio.pytest_plugin
@@ -1 +0,0 @@
anyio
@@ -1,169 +0,0 @@
from __future__ import annotations

__all__ = (
    "maybe_async",
    "maybe_async_cm",
    "run",
    "sleep",
    "sleep_forever",
    "sleep_until",
    "current_time",
    "get_all_backends",
    "get_cancelled_exc_class",
    "BrokenResourceError",
    "BrokenWorkerProcess",
    "BusyResourceError",
    "ClosedResourceError",
    "DelimiterNotFound",
    "EndOfStream",
    "ExceptionGroup",
    "IncompleteRead",
    "TypedAttributeLookupError",
    "WouldBlock",
    "AsyncFile",
    "Path",
    "open_file",
    "wrap_file",
    "aclose_forcefully",
    "open_signal_receiver",
    "connect_tcp",
    "connect_unix",
    "create_tcp_listener",
    "create_unix_listener",
    "create_udp_socket",
    "create_connected_udp_socket",
    "getaddrinfo",
    "getnameinfo",
    "wait_socket_readable",
    "wait_socket_writable",
    "create_memory_object_stream",
    "run_process",
    "open_process",
    "create_lock",
    "CapacityLimiter",
    "CapacityLimiterStatistics",
    "Condition",
    "ConditionStatistics",
    "Event",
    "EventStatistics",
    "Lock",
    "LockStatistics",
    "Semaphore",
    "SemaphoreStatistics",
    "create_condition",
    "create_event",
    "create_semaphore",
    "create_capacity_limiter",
    "open_cancel_scope",
    "fail_after",
    "move_on_after",
    "current_effective_deadline",
    "TASK_STATUS_IGNORED",
    "CancelScope",
    "create_task_group",
    "TaskInfo",
    "get_current_task",
    "get_running_tasks",
    "wait_all_tasks_blocked",
    "run_sync_in_worker_thread",
    "run_async_from_thread",
    "run_sync_from_thread",
    "current_default_worker_thread_limiter",
    "create_blocking_portal",
    "start_blocking_portal",
    "typed_attribute",
    "TypedAttributeSet",
    "TypedAttributeProvider",
)

from typing import Any

from ._core._compat import maybe_async, maybe_async_cm
from ._core._eventloop import (
    current_time,
    get_all_backends,
    get_cancelled_exc_class,
    run,
    sleep,
    sleep_forever,
    sleep_until,
)
from ._core._exceptions import (
    BrokenResourceError,
    BrokenWorkerProcess,
    BusyResourceError,
    ClosedResourceError,
    DelimiterNotFound,
    EndOfStream,
    ExceptionGroup,
    IncompleteRead,
    TypedAttributeLookupError,
    WouldBlock,
)
from ._core._fileio import AsyncFile, Path, open_file, wrap_file
from ._core._resources import aclose_forcefully
from ._core._signals import open_signal_receiver
from ._core._sockets import (
    connect_tcp,
    connect_unix,
    create_connected_udp_socket,
    create_tcp_listener,
    create_udp_socket,
    create_unix_listener,
    getaddrinfo,
    getnameinfo,
    wait_socket_readable,
    wait_socket_writable,
)
from ._core._streams import create_memory_object_stream
from ._core._subprocesses import open_process, run_process
from ._core._synchronization import (
    CapacityLimiter,
    CapacityLimiterStatistics,
    Condition,
    ConditionStatistics,
    Event,
    EventStatistics,
    Lock,
    LockStatistics,
    Semaphore,
    SemaphoreStatistics,
    create_capacity_limiter,
    create_condition,
    create_event,
    create_lock,
    create_semaphore,
)
from ._core._tasks import (
    TASK_STATUS_IGNORED,
    CancelScope,
    create_task_group,
    current_effective_deadline,
    fail_after,
    move_on_after,
    open_cancel_scope,
)
from ._core._testing import (
    TaskInfo,
    get_current_task,
    get_running_tasks,
    wait_all_tasks_blocked,
)
from ._core._typedattr import TypedAttributeProvider, TypedAttributeSet, typed_attribute

# Re-exported here, for backwards compatibility
# isort: off
from .to_thread import current_default_worker_thread_limiter, run_sync_in_worker_thread
from .from_thread import (
    create_blocking_portal,
    run_async_from_thread,
    run_sync_from_thread,
    start_blocking_portal,
)

# Re-export imports so they look like they live directly in this package
key: str
value: Any
for key, value in list(locals().items()):
    if getattr(value, "__module__", "").startswith("anyio."):
        value.__module__ = __name__
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
File diff suppressed because it is too large
@@ -1,996 +0,0 @@
from __future__ import annotations

import array
import math
import socket
from concurrent.futures import Future
from contextvars import copy_context
from dataclasses import dataclass
from functools import partial
from io import IOBase
from os import PathLike
from signal import Signals
from types import TracebackType
from typing import (
    IO,
    TYPE_CHECKING,
    Any,
    AsyncGenerator,
    AsyncIterator,
    Awaitable,
    Callable,
    Collection,
    Coroutine,
    Generic,
    Iterable,
    Mapping,
    NoReturn,
    Sequence,
    TypeVar,
    cast,
)

import sniffio
import trio.from_thread
from outcome import Error, Outcome, Value
from trio.socket import SocketType as TrioSocketType
from trio.to_thread import run_sync

from .. import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc
from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable
from .._core._eventloop import claim_worker_thread
from .._core._exceptions import (
    BrokenResourceError,
    BusyResourceError,
    ClosedResourceError,
    EndOfStream,
)
from .._core._exceptions import ExceptionGroup as BaseExceptionGroup
from .._core._sockets import convert_ipv6_sockaddr
from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter
from .._core._synchronization import Event as BaseEvent
from .._core._synchronization import ResourceGuard
from .._core._tasks import CancelScope as BaseCancelScope
from ..abc import IPSockAddrType, UDPPacketType

if TYPE_CHECKING:
    from trio_typing import TaskStatus

try:
    from trio import lowlevel as trio_lowlevel
except ImportError:
    from trio import hazmat as trio_lowlevel  # type: ignore[no-redef]
    from trio.hazmat import wait_readable, wait_writable
else:
    from trio.lowlevel import wait_readable, wait_writable

try:
    trio_open_process = trio_lowlevel.open_process
except AttributeError:
    # isort: off
    from trio import (  # type: ignore[attr-defined, no-redef]
        open_process as trio_open_process,
    )

T_Retval = TypeVar("T_Retval")
T_SockAddr = TypeVar("T_SockAddr", str, IPSockAddrType)


#
# Event loop
#

run = trio.run
current_token = trio.lowlevel.current_trio_token
RunVar = trio.lowlevel.RunVar


#
# Miscellaneous
#

sleep = trio.sleep


#
# Timeouts and cancellation
#


class CancelScope(BaseCancelScope):
    def __new__(
        cls, original: trio.CancelScope | None = None, **kwargs: object
    ) -> CancelScope:
        return object.__new__(cls)

    def __init__(self, original: trio.CancelScope | None = None, **kwargs: Any) -> None:
        self.__original = original or trio.CancelScope(**kwargs)

    def __enter__(self) -> CancelScope:
        self.__original.__enter__()
        return self

    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        # https://github.com/python-trio/trio-typing/pull/79
        return self.__original.__exit__(  # type: ignore[func-returns-value]
            exc_type, exc_val, exc_tb
        )

    def cancel(self) -> DeprecatedAwaitable:
        self.__original.cancel()
        return DeprecatedAwaitable(self.cancel)

    @property
    def deadline(self) -> float:
        return self.__original.deadline

    @deadline.setter
    def deadline(self, value: float) -> None:
        self.__original.deadline = value

    @property
    def cancel_called(self) -> bool:
        return self.__original.cancel_called

    @property
    def shield(self) -> bool:
        return self.__original.shield

    @shield.setter
    def shield(self, value: bool) -> None:
        self.__original.shield = value


CancelledError = trio.Cancelled
checkpoint = trio.lowlevel.checkpoint
checkpoint_if_cancelled = trio.lowlevel.checkpoint_if_cancelled
cancel_shielded_checkpoint = trio.lowlevel.cancel_shielded_checkpoint
current_effective_deadline = trio.current_effective_deadline
current_time = trio.current_time


#
# Task groups
#


class ExceptionGroup(BaseExceptionGroup, trio.MultiError):
    pass


class TaskGroup(abc.TaskGroup):
    def __init__(self) -> None:
        self._active = False
        self._nursery_manager = trio.open_nursery()
        self.cancel_scope = None  # type: ignore[assignment]

    async def __aenter__(self) -> TaskGroup:
        self._active = True
        self._nursery = await self._nursery_manager.__aenter__()
        self.cancel_scope = CancelScope(self._nursery.cancel_scope)
        return self

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        try:
            return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)
        except trio.MultiError as exc:
            raise ExceptionGroup(exc.exceptions) from None
        finally:
            self._active = False

    def start_soon(
        self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None
    ) -> None:
        if not self._active:
            raise RuntimeError(
                "This task group is not active; no new tasks can be started."
            )

        self._nursery.start_soon(func, *args, name=name)

    async def start(
        self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None
    ) -> object:
        if not self._active:
            raise RuntimeError(
                "This task group is not active; no new tasks can be started."
            )

        return await self._nursery.start(func, *args, name=name)


#
# Threads
#


async def run_sync_in_worker_thread(
    func: Callable[..., T_Retval],
    *args: object,
    cancellable: bool = False,
    limiter: trio.CapacityLimiter | None = None,
) -> T_Retval:
    def wrapper() -> T_Retval:
        with claim_worker_thread("trio"):
            return func(*args)

    # TODO: remove explicit context copying when trio 0.20 is the minimum requirement
    context = copy_context()
    context.run(sniffio.current_async_library_cvar.set, None)
    return await run_sync(
        context.run, wrapper, cancellable=cancellable, limiter=limiter
    )


# TODO: remove this workaround when trio 0.20 is the minimum requirement
def run_async_from_thread(
    fn: Callable[..., Awaitable[T_Retval]], *args: Any
) -> T_Retval:
    async def wrapper() -> T_Retval:
        retval: T_Retval

        async def inner() -> None:
            nonlocal retval
            __tracebackhide__ = True
            retval = await fn(*args)

        async with trio.open_nursery() as n:
            context.run(n.start_soon, inner)

        __tracebackhide__ = True
        return retval  # noqa: F821

    context = copy_context()
    context.run(sniffio.current_async_library_cvar.set, "trio")
    return trio.from_thread.run(wrapper)


def run_sync_from_thread(fn: Callable[..., T_Retval], *args: Any) -> T_Retval:
    # TODO: remove explicit context copying when trio 0.20 is the minimum requirement
    retval = trio.from_thread.run_sync(copy_context().run, fn, *args)
    return cast(T_Retval, retval)


class BlockingPortal(abc.BlockingPortal):
    def __new__(cls) -> BlockingPortal:
        return object.__new__(cls)

    def __init__(self) -> None:
        super().__init__()
        self._token = trio.lowlevel.current_trio_token()

    def _spawn_task_from_thread(
        self,
        func: Callable,
        args: tuple,
        kwargs: dict[str, Any],
        name: object,
        future: Future,
    ) -> None:
        context = copy_context()
        context.run(sniffio.current_async_library_cvar.set, "trio")
        trio.from_thread.run_sync(
            context.run,
            partial(self._task_group.start_soon, name=name),
            self._call_func,
            func,
            args,
            kwargs,
            future,
            trio_token=self._token,
        )


#
# Subprocesses
#


@dataclass(eq=False)
class ReceiveStreamWrapper(abc.ByteReceiveStream):
    _stream: trio.abc.ReceiveStream

    async def receive(self, max_bytes: int | None = None) -> bytes:
        try:
            data = await self._stream.receive_some(max_bytes)
        except trio.ClosedResourceError as exc:
            raise ClosedResourceError from exc.__cause__
        except trio.BrokenResourceError as exc:
            raise BrokenResourceError from exc.__cause__

        if data:
            return data
        else:
            raise EndOfStream

    async def aclose(self) -> None:
        await self._stream.aclose()


@dataclass(eq=False)
class SendStreamWrapper(abc.ByteSendStream):
    _stream: trio.abc.SendStream

    async def send(self, item: bytes) -> None:
        try:
            await self._stream.send_all(item)
        except trio.ClosedResourceError as exc:
            raise ClosedResourceError from exc.__cause__
        except trio.BrokenResourceError as exc:
            raise BrokenResourceError from exc.__cause__

    async def aclose(self) -> None:
        await self._stream.aclose()


@dataclass(eq=False)
class Process(abc.Process):
    _process: trio.Process
    _stdin: abc.ByteSendStream | None
    _stdout: abc.ByteReceiveStream | None
    _stderr: abc.ByteReceiveStream | None

    async def aclose(self) -> None:
        if self._stdin:
            await self._stdin.aclose()
        if self._stdout:
            await self._stdout.aclose()
        if self._stderr:
            await self._stderr.aclose()

        await self.wait()

    async def wait(self) -> int:
        return await self._process.wait()

    def terminate(self) -> None:
        self._process.terminate()

    def kill(self) -> None:
        self._process.kill()

    def send_signal(self, signal: Signals) -> None:
        self._process.send_signal(signal)

    @property
    def pid(self) -> int:
        return self._process.pid

    @property
    def returncode(self) -> int | None:
        return self._process.returncode

    @property
    def stdin(self) -> abc.ByteSendStream | None:
        return self._stdin

    @property
    def stdout(self) -> abc.ByteReceiveStream | None:
        return self._stdout

    @property
    def stderr(self) -> abc.ByteReceiveStream | None:
        return self._stderr


async def open_process(
    command: str | bytes | Sequence[str | bytes],
    *,
    shell: bool,
    stdin: int | IO[Any] | None,
    stdout: int | IO[Any] | None,
    stderr: int | IO[Any] | None,
    cwd: str | bytes | PathLike | None = None,
    env: Mapping[str, str] | None = None,
    start_new_session: bool = False,
) -> Process:
    process = await trio_open_process(  # type: ignore[misc]
        command,  # type: ignore[arg-type]
        stdin=stdin,
        stdout=stdout,
        stderr=stderr,
        shell=shell,
        cwd=cwd,
        env=env,
        start_new_session=start_new_session,
    )
    stdin_stream = SendStreamWrapper(process.stdin) if process.stdin else None
    stdout_stream = ReceiveStreamWrapper(process.stdout) if process.stdout else None
    stderr_stream = ReceiveStreamWrapper(process.stderr) if process.stderr else None
    return Process(process, stdin_stream, stdout_stream, stderr_stream)


class _ProcessPoolShutdownInstrument(trio.abc.Instrument):
    def after_run(self) -> None:
        super().after_run()


current_default_worker_process_limiter: RunVar = RunVar(
    "current_default_worker_process_limiter"
)


async def _shutdown_process_pool(workers: set[Process]) -> None:
    process: Process
    try:
        await sleep(math.inf)
    except trio.Cancelled:
        for process in workers:
            if process.returncode is None:
                process.kill()

        with CancelScope(shield=True):
            for process in workers:
                await process.aclose()


def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None:
    trio.lowlevel.spawn_system_task(_shutdown_process_pool, workers)


#
# Sockets and networking
#


class _TrioSocketMixin(Generic[T_SockAddr]):
    def __init__(self, trio_socket: TrioSocketType) -> None:
        self._trio_socket = trio_socket
        self._closed = False

    def _check_closed(self) -> None:
        if self._closed:
            raise ClosedResourceError
        if self._trio_socket.fileno() < 0:
            raise BrokenResourceError

    @property
    def _raw_socket(self) -> socket.socket:
        return self._trio_socket._sock  # type: ignore[attr-defined]

    async def aclose(self) -> None:
        if self._trio_socket.fileno() >= 0:
            self._closed = True
            self._trio_socket.close()

    def _convert_socket_error(self, exc: BaseException) -> NoReturn:
        if isinstance(exc, trio.ClosedResourceError):
            raise ClosedResourceError from exc
        elif self._trio_socket.fileno() < 0 and self._closed:
            raise ClosedResourceError from None
        elif isinstance(exc, OSError):
            raise BrokenResourceError from exc
        else:
            raise exc


class SocketStream(_TrioSocketMixin, abc.SocketStream):
    def __init__(self, trio_socket: TrioSocketType) -> None:
        super().__init__(trio_socket)
        self._receive_guard = ResourceGuard("reading from")
        self._send_guard = ResourceGuard("writing to")

    async def receive(self, max_bytes: int = 65536) -> bytes:
        with self._receive_guard:
            try:
                data = await self._trio_socket.recv(max_bytes)
            except BaseException as exc:
                self._convert_socket_error(exc)

            if data:
                return data
            else:
                raise EndOfStream

    async def send(self, item: bytes) -> None:
        with self._send_guard:
            view = memoryview(item)
            while view:
                try:
                    bytes_sent = await self._trio_socket.send(view)
                except BaseException as exc:
                    self._convert_socket_error(exc)

                view = view[bytes_sent:]

    async def send_eof(self) -> None:
        self._trio_socket.shutdown(socket.SHUT_WR)


class UNIXSocketStream(SocketStream, abc.UNIXSocketStream):
    async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]:
        if not isinstance(msglen, int) or msglen < 0:
            raise ValueError("msglen must be a non-negative integer")
        if not isinstance(maxfds, int) or maxfds < 1:
            raise ValueError("maxfds must be a positive integer")

        fds = array.array("i")
        await checkpoint()
|
|
||||||
with self._receive_guard:
|
|
||||||
while True:
|
|
||||||
try:
|
|
||||||
message, ancdata, flags, addr = await self._trio_socket.recvmsg(
|
|
||||||
msglen, socket.CMSG_LEN(maxfds * fds.itemsize)
|
|
||||||
)
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
else:
|
|
||||||
if not message and not ancdata:
|
|
||||||
raise EndOfStream
|
|
||||||
|
|
||||||
break
|
|
||||||
|
|
||||||
for cmsg_level, cmsg_type, cmsg_data in ancdata:
|
|
||||||
if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS:
|
|
||||||
raise RuntimeError(
|
|
||||||
f"Received unexpected ancillary data; message = {message!r}, "
|
|
||||||
f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}"
|
|
||||||
)
|
|
||||||
|
|
||||||
fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
|
|
||||||
|
|
||||||
return message, list(fds)
|
|
||||||
|
|
||||||
async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None:
|
|
||||||
if not message:
|
|
||||||
raise ValueError("message must not be empty")
|
|
||||||
if not fds:
|
|
||||||
raise ValueError("fds must not be empty")
|
|
||||||
|
|
||||||
filenos: list[int] = []
|
|
||||||
for fd in fds:
|
|
||||||
if isinstance(fd, int):
|
|
||||||
filenos.append(fd)
|
|
||||||
elif isinstance(fd, IOBase):
|
|
||||||
filenos.append(fd.fileno())
|
|
||||||
|
|
||||||
fdarray = array.array("i", filenos)
|
|
||||||
await checkpoint()
|
|
||||||
with self._send_guard:
|
|
||||||
while True:
|
|
||||||
try:
|
|
||||||
await self._trio_socket.sendmsg(
|
|
||||||
[message],
|
|
||||||
[
|
|
||||||
(
|
|
||||||
socket.SOL_SOCKET,
|
|
||||||
socket.SCM_RIGHTS, # type: ignore[list-item]
|
|
||||||
fdarray,
|
|
||||||
)
|
|
||||||
],
|
|
||||||
)
|
|
||||||
break
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
|
|
||||||
class TCPSocketListener(_TrioSocketMixin, abc.SocketListener):
|
|
||||||
def __init__(self, raw_socket: socket.socket):
|
|
||||||
super().__init__(trio.socket.from_stdlib_socket(raw_socket))
|
|
||||||
self._accept_guard = ResourceGuard("accepting connections from")
|
|
||||||
|
|
||||||
async def accept(self) -> SocketStream:
|
|
||||||
with self._accept_guard:
|
|
||||||
try:
|
|
||||||
trio_socket, _addr = await self._trio_socket.accept()
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
|
|
||||||
return SocketStream(trio_socket)
|
|
||||||
|
|
||||||
|
|
||||||
class UNIXSocketListener(_TrioSocketMixin, abc.SocketListener):
|
|
||||||
def __init__(self, raw_socket: socket.socket):
|
|
||||||
super().__init__(trio.socket.from_stdlib_socket(raw_socket))
|
|
||||||
self._accept_guard = ResourceGuard("accepting connections from")
|
|
||||||
|
|
||||||
async def accept(self) -> UNIXSocketStream:
|
|
||||||
with self._accept_guard:
|
|
||||||
try:
|
|
||||||
trio_socket, _addr = await self._trio_socket.accept()
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
return UNIXSocketStream(trio_socket)
|
|
||||||
|
|
||||||
|
|
||||||
class UDPSocket(_TrioSocketMixin[IPSockAddrType], abc.UDPSocket):
|
|
||||||
def __init__(self, trio_socket: TrioSocketType) -> None:
|
|
||||||
super().__init__(trio_socket)
|
|
||||||
self._receive_guard = ResourceGuard("reading from")
|
|
||||||
self._send_guard = ResourceGuard("writing to")
|
|
||||||
|
|
||||||
async def receive(self) -> tuple[bytes, IPSockAddrType]:
|
|
||||||
with self._receive_guard:
|
|
||||||
try:
|
|
||||||
data, addr = await self._trio_socket.recvfrom(65536)
|
|
||||||
return data, convert_ipv6_sockaddr(addr)
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
async def send(self, item: UDPPacketType) -> None:
|
|
||||||
with self._send_guard:
|
|
||||||
try:
|
|
||||||
await self._trio_socket.sendto(*item)
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
|
|
||||||
class ConnectedUDPSocket(_TrioSocketMixin[IPSockAddrType], abc.ConnectedUDPSocket):
|
|
||||||
def __init__(self, trio_socket: TrioSocketType) -> None:
|
|
||||||
super().__init__(trio_socket)
|
|
||||||
self._receive_guard = ResourceGuard("reading from")
|
|
||||||
self._send_guard = ResourceGuard("writing to")
|
|
||||||
|
|
||||||
async def receive(self) -> bytes:
|
|
||||||
with self._receive_guard:
|
|
||||||
try:
|
|
||||||
return await self._trio_socket.recv(65536)
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
async def send(self, item: bytes) -> None:
|
|
||||||
with self._send_guard:
|
|
||||||
try:
|
|
||||||
await self._trio_socket.send(item)
|
|
||||||
except BaseException as exc:
|
|
||||||
self._convert_socket_error(exc)
|
|
||||||
|
|
||||||
|
|
||||||
async def connect_tcp(
|
|
||||||
host: str, port: int, local_address: IPSockAddrType | None = None
|
|
||||||
) -> SocketStream:
|
|
||||||
family = socket.AF_INET6 if ":" in host else socket.AF_INET
|
|
||||||
trio_socket = trio.socket.socket(family)
|
|
||||||
trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
|
|
||||||
if local_address:
|
|
||||||
await trio_socket.bind(local_address)
|
|
||||||
|
|
||||||
try:
|
|
||||||
await trio_socket.connect((host, port))
|
|
||||||
except BaseException:
|
|
||||||
trio_socket.close()
|
|
||||||
raise
|
|
||||||
|
|
||||||
return SocketStream(trio_socket)
|
|
||||||
|
|
||||||
|
|
||||||
async def connect_unix(path: str) -> UNIXSocketStream:
|
|
||||||
trio_socket = trio.socket.socket(socket.AF_UNIX)
|
|
||||||
try:
|
|
||||||
await trio_socket.connect(path)
|
|
||||||
except BaseException:
|
|
||||||
trio_socket.close()
|
|
||||||
raise
|
|
||||||
|
|
||||||
return UNIXSocketStream(trio_socket)
|
|
||||||
|
|
||||||
|
|
||||||
async def create_udp_socket(
|
|
||||||
family: socket.AddressFamily,
|
|
||||||
local_address: IPSockAddrType | None,
|
|
||||||
remote_address: IPSockAddrType | None,
|
|
||||||
reuse_port: bool,
|
|
||||||
) -> UDPSocket | ConnectedUDPSocket:
|
|
||||||
trio_socket = trio.socket.socket(family=family, type=socket.SOCK_DGRAM)
|
|
||||||
|
|
||||||
if reuse_port:
|
|
||||||
trio_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
|
|
||||||
|
|
||||||
if local_address:
|
|
||||||
await trio_socket.bind(local_address)
|
|
||||||
|
|
||||||
if remote_address:
|
|
||||||
await trio_socket.connect(remote_address)
|
|
||||||
return ConnectedUDPSocket(trio_socket)
|
|
||||||
else:
|
|
||||||
return UDPSocket(trio_socket)
|
|
||||||
|
|
||||||
|
|
||||||
getaddrinfo = trio.socket.getaddrinfo
|
|
||||||
getnameinfo = trio.socket.getnameinfo
|
|
||||||
|
|
||||||
|
|
||||||
async def wait_socket_readable(sock: socket.socket) -> None:
|
|
||||||
try:
|
|
||||||
await wait_readable(sock)
|
|
||||||
except trio.ClosedResourceError as exc:
|
|
||||||
raise ClosedResourceError().with_traceback(exc.__traceback__) from None
|
|
||||||
except trio.BusyResourceError:
|
|
||||||
raise BusyResourceError("reading from") from None
|
|
||||||
|
|
||||||
|
|
||||||
async def wait_socket_writable(sock: socket.socket) -> None:
|
|
||||||
try:
|
|
||||||
await wait_writable(sock)
|
|
||||||
except trio.ClosedResourceError as exc:
|
|
||||||
raise ClosedResourceError().with_traceback(exc.__traceback__) from None
|
|
||||||
except trio.BusyResourceError:
|
|
||||||
raise BusyResourceError("writing to") from None
|
|
||||||
|
|
||||||
|
|
||||||
#
|
|
||||||
# Synchronization
|
|
||||||
#
|
|
||||||
|
|
||||||
|
|
||||||
class Event(BaseEvent):
|
|
||||||
def __new__(cls) -> Event:
|
|
||||||
return object.__new__(cls)
|
|
||||||
|
|
||||||
def __init__(self) -> None:
|
|
||||||
self.__original = trio.Event()
|
|
||||||
|
|
||||||
def is_set(self) -> bool:
|
|
||||||
return self.__original.is_set()
|
|
||||||
|
|
||||||
async def wait(self) -> None:
|
|
||||||
return await self.__original.wait()
|
|
||||||
|
|
||||||
def statistics(self) -> EventStatistics:
|
|
||||||
orig_statistics = self.__original.statistics()
|
|
||||||
return EventStatistics(tasks_waiting=orig_statistics.tasks_waiting)
|
|
||||||
|
|
||||||
def set(self) -> DeprecatedAwaitable:
|
|
||||||
self.__original.set()
|
|
||||||
return DeprecatedAwaitable(self.set)
|
|
||||||
|
|
||||||
|
|
||||||
class CapacityLimiter(BaseCapacityLimiter):
|
|
||||||
def __new__(cls, *args: object, **kwargs: object) -> CapacityLimiter:
|
|
||||||
return object.__new__(cls)
|
|
||||||
|
|
||||||
def __init__(
|
|
||||||
self, *args: Any, original: trio.CapacityLimiter | None = None
|
|
||||||
) -> None:
|
|
||||||
self.__original = original or trio.CapacityLimiter(*args)
|
|
||||||
|
|
||||||
async def __aenter__(self) -> None:
|
|
||||||
return await self.__original.__aenter__()
|
|
||||||
|
|
||||||
async def __aexit__(
|
|
||||||
self,
|
|
||||||
exc_type: type[BaseException] | None,
|
|
||||||
exc_val: BaseException | None,
|
|
||||||
exc_tb: TracebackType | None,
|
|
||||||
) -> None:
|
|
||||||
await self.__original.__aexit__(exc_type, exc_val, exc_tb)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def total_tokens(self) -> float:
|
|
||||||
return self.__original.total_tokens
|
|
||||||
|
|
||||||
@total_tokens.setter
|
|
||||||
def total_tokens(self, value: float) -> None:
|
|
||||||
self.__original.total_tokens = value
|
|
||||||
|
|
||||||
@property
|
|
||||||
def borrowed_tokens(self) -> int:
|
|
||||||
return self.__original.borrowed_tokens
|
|
||||||
|
|
||||||
@property
|
|
||||||
def available_tokens(self) -> float:
|
|
||||||
return self.__original.available_tokens
|
|
||||||
|
|
||||||
def acquire_nowait(self) -> DeprecatedAwaitable:
|
|
||||||
self.__original.acquire_nowait()
|
|
||||||
return DeprecatedAwaitable(self.acquire_nowait)
|
|
||||||
|
|
||||||
def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable:
|
|
||||||
self.__original.acquire_on_behalf_of_nowait(borrower)
|
|
||||||
return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait)
|
|
||||||
|
|
||||||
async def acquire(self) -> None:
|
|
||||||
await self.__original.acquire()
|
|
||||||
|
|
||||||
async def acquire_on_behalf_of(self, borrower: object) -> None:
|
|
||||||
await self.__original.acquire_on_behalf_of(borrower)
|
|
||||||
|
|
||||||
def release(self) -> None:
|
|
||||||
return self.__original.release()
|
|
||||||
|
|
||||||
def release_on_behalf_of(self, borrower: object) -> None:
|
|
||||||
return self.__original.release_on_behalf_of(borrower)
|
|
||||||
|
|
||||||
def statistics(self) -> CapacityLimiterStatistics:
|
|
||||||
orig = self.__original.statistics()
|
|
||||||
return CapacityLimiterStatistics(
|
|
||||||
borrowed_tokens=orig.borrowed_tokens,
|
|
||||||
total_tokens=orig.total_tokens,
|
|
||||||
borrowers=orig.borrowers,
|
|
||||||
tasks_waiting=orig.tasks_waiting,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
_capacity_limiter_wrapper: RunVar = RunVar("_capacity_limiter_wrapper")
|
|
||||||
|
|
||||||
|
|
||||||
def current_default_thread_limiter() -> CapacityLimiter:
|
|
||||||
try:
|
|
||||||
return _capacity_limiter_wrapper.get()
|
|
||||||
except LookupError:
|
|
||||||
limiter = CapacityLimiter(
|
|
||||||
original=trio.to_thread.current_default_thread_limiter()
|
|
||||||
)
|
|
||||||
_capacity_limiter_wrapper.set(limiter)
|
|
||||||
return limiter
|
|
||||||
|
|
||||||
|
|
||||||
#
|
|
||||||
# Signal handling
|
|
||||||
#
|
|
||||||
|
|
||||||
|
|
||||||
class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]):
|
|
||||||
_iterator: AsyncIterator[int]
|
|
||||||
|
|
||||||
def __init__(self, signals: tuple[Signals, ...]):
|
|
||||||
self._signals = signals
|
|
||||||
|
|
||||||
def __enter__(self) -> _SignalReceiver:
|
|
||||||
self._cm = trio.open_signal_receiver(*self._signals)
|
|
||||||
self._iterator = self._cm.__enter__()
|
|
||||||
return self
|
|
||||||
|
|
||||||
def __exit__(
|
|
||||||
self,
|
|
||||||
exc_type: type[BaseException] | None,
|
|
||||||
exc_val: BaseException | None,
|
|
||||||
exc_tb: TracebackType | None,
|
|
||||||
) -> bool | None:
|
|
||||||
return self._cm.__exit__(exc_type, exc_val, exc_tb)
|
|
||||||
|
|
||||||
def __aiter__(self) -> _SignalReceiver:
|
|
||||||
return self
|
|
||||||
|
|
||||||
async def __anext__(self) -> Signals:
|
|
||||||
signum = await self._iterator.__anext__()
|
|
||||||
return Signals(signum)
|
|
||||||
|
|
||||||
|
|
||||||
def open_signal_receiver(*signals: Signals) -> _SignalReceiver:
|
|
||||||
return _SignalReceiver(signals)
|
|
||||||
|
|
||||||
|
|
||||||
#
|
|
||||||
# Testing and debugging
|
|
||||||
#
|
|
||||||
|
|
||||||
|
|
||||||
def get_current_task() -> TaskInfo:
|
|
||||||
task = trio_lowlevel.current_task()
|
|
||||||
|
|
||||||
parent_id = None
|
|
||||||
if task.parent_nursery and task.parent_nursery.parent_task:
|
|
||||||
parent_id = id(task.parent_nursery.parent_task)
|
|
||||||
|
|
||||||
return TaskInfo(id(task), parent_id, task.name, task.coro)
|
|
||||||
|
|
||||||
|
|
||||||
def get_running_tasks() -> list[TaskInfo]:
|
|
||||||
root_task = trio_lowlevel.current_root_task()
|
|
||||||
task_infos = [TaskInfo(id(root_task), None, root_task.name, root_task.coro)]
|
|
||||||
nurseries = root_task.child_nurseries
|
|
||||||
while nurseries:
|
|
||||||
new_nurseries: list[trio.Nursery] = []
|
|
||||||
for nursery in nurseries:
|
|
||||||
for task in nursery.child_tasks:
|
|
||||||
task_infos.append(
|
|
||||||
TaskInfo(id(task), id(nursery.parent_task), task.name, task.coro)
|
|
||||||
)
|
|
||||||
new_nurseries.extend(task.child_nurseries)
|
|
||||||
|
|
||||||
nurseries = new_nurseries
|
|
||||||
|
|
||||||
return task_infos
|
|
||||||
|
|
||||||
|
|
||||||
def wait_all_tasks_blocked() -> Awaitable[None]:
|
|
||||||
import trio.testing
|
|
||||||
|
|
||||||
return trio.testing.wait_all_tasks_blocked()
|
|
||||||
|
|
||||||
|
|
||||||
class TestRunner(abc.TestRunner):
|
|
||||||
def __init__(self, **options: Any) -> None:
|
|
||||||
from collections import deque
|
|
||||||
from queue import Queue
|
|
||||||
|
|
||||||
self._call_queue: Queue[Callable[..., object]] = Queue()
|
|
||||||
self._result_queue: deque[Outcome] = deque()
|
|
||||||
self._stop_event: trio.Event | None = None
|
|
||||||
self._nursery: trio.Nursery | None = None
|
|
||||||
self._options = options
|
|
||||||
|
|
||||||
async def _trio_main(self) -> None:
|
|
||||||
self._stop_event = trio.Event()
|
|
||||||
async with trio.open_nursery() as self._nursery:
|
|
||||||
await self._stop_event.wait()
|
|
||||||
|
|
||||||
async def _call_func(
|
|
||||||
self, func: Callable[..., Awaitable[object]], args: tuple, kwargs: dict
|
|
||||||
) -> None:
|
|
||||||
try:
|
|
||||||
retval = await func(*args, **kwargs)
|
|
||||||
except BaseException as exc:
|
|
||||||
self._result_queue.append(Error(exc))
|
|
||||||
else:
|
|
||||||
self._result_queue.append(Value(retval))
|
|
||||||
|
|
||||||
def _main_task_finished(self, outcome: object) -> None:
|
|
||||||
self._nursery = None
|
|
||||||
|
|
||||||
def _get_nursery(self) -> trio.Nursery:
|
|
||||||
if self._nursery is None:
|
|
||||||
trio.lowlevel.start_guest_run(
|
|
||||||
self._trio_main,
|
|
||||||
run_sync_soon_threadsafe=self._call_queue.put,
|
|
||||||
done_callback=self._main_task_finished,
|
|
||||||
**self._options,
|
|
||||||
)
|
|
||||||
while self._nursery is None:
|
|
||||||
self._call_queue.get()()
|
|
||||||
|
|
||||||
return self._nursery
|
|
||||||
|
|
||||||
def _call(
|
|
||||||
self, func: Callable[..., Awaitable[T_Retval]], *args: object, **kwargs: object
|
|
||||||
) -> T_Retval:
|
|
||||||
self._get_nursery().start_soon(self._call_func, func, args, kwargs)
|
|
||||||
while not self._result_queue:
|
|
||||||
self._call_queue.get()()
|
|
||||||
|
|
||||||
outcome = self._result_queue.pop()
|
|
||||||
return outcome.unwrap()
|
|
||||||
|
|
||||||
def close(self) -> None:
|
|
||||||
if self._stop_event:
|
|
||||||
self._stop_event.set()
|
|
||||||
while self._nursery is not None:
|
|
||||||
self._call_queue.get()()
|
|
||||||
|
|
||||||
def run_asyncgen_fixture(
|
|
||||||
self,
|
|
||||||
fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]],
|
|
||||||
kwargs: dict[str, Any],
|
|
||||||
) -> Iterable[T_Retval]:
|
|
||||||
async def fixture_runner(*, task_status: TaskStatus[T_Retval]) -> None:
|
|
||||||
agen = fixture_func(**kwargs)
|
|
||||||
retval = await agen.asend(None)
|
|
||||||
task_status.started(retval)
|
|
||||||
await teardown_event.wait()
|
|
||||||
try:
|
|
||||||
await agen.asend(None)
|
|
||||||
except StopAsyncIteration:
|
|
||||||
pass
|
|
||||||
else:
|
|
||||||
await agen.aclose()
|
|
||||||
raise RuntimeError("Async generator fixture did not stop")
|
|
||||||
|
|
||||||
teardown_event = trio.Event()
|
|
||||||
fixture_value = self._call(lambda: self._get_nursery().start(fixture_runner))
|
|
||||||
yield fixture_value
|
|
||||||
teardown_event.set()
|
|
||||||
|
|
||||||
def run_fixture(
|
|
||||||
self,
|
|
||||||
fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]],
|
|
||||||
kwargs: dict[str, Any],
|
|
||||||
) -> T_Retval:
|
|
||||||
return self._call(fixture_func, **kwargs)
|
|
||||||
|
|
||||||
def run_test(
|
|
||||||
self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any]
|
|
||||||
) -> None:
|
|
||||||
self._call(test_func, **kwargs)
|
|
||||||
Binary file not shown.
@@ -1,217 +0,0 @@
from __future__ import annotations

from abc import ABCMeta, abstractmethod
from contextlib import AbstractContextManager
from types import TracebackType
from typing import (
    TYPE_CHECKING,
    Any,
    AsyncContextManager,
    Callable,
    ContextManager,
    Generator,
    Generic,
    Iterable,
    List,
    TypeVar,
    Union,
    overload,
)
from warnings import warn

if TYPE_CHECKING:
    from ._testing import TaskInfo
else:
    TaskInfo = object

T = TypeVar("T")
AnyDeprecatedAwaitable = Union[
    "DeprecatedAwaitable",
    "DeprecatedAwaitableFloat",
    "DeprecatedAwaitableList[T]",
    TaskInfo,
]


@overload
async def maybe_async(__obj: TaskInfo) -> TaskInfo:
    ...


@overload
async def maybe_async(__obj: DeprecatedAwaitableFloat) -> float:
    ...


@overload
async def maybe_async(__obj: DeprecatedAwaitableList[T]) -> list[T]:
    ...


@overload
async def maybe_async(__obj: DeprecatedAwaitable) -> None:
    ...


async def maybe_async(
    __obj: AnyDeprecatedAwaitable[T],
) -> TaskInfo | float | list[T] | None:
    """
    Await on the given object if necessary.

    This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
    methods were converted from coroutine functions into regular functions.

    Do **not** try to use this for any other purpose!

    :return: the result of awaiting on the object if coroutine, or the object itself otherwise

    .. versionadded:: 2.2

    """
    return __obj._unwrap()


class _ContextManagerWrapper:
    def __init__(self, cm: ContextManager[T]):
        self._cm = cm

    async def __aenter__(self) -> T:
        return self._cm.__enter__()

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        return self._cm.__exit__(exc_type, exc_val, exc_tb)


def maybe_async_cm(
    cm: ContextManager[T] | AsyncContextManager[T],
) -> AsyncContextManager[T]:
    """
    Wrap a regular context manager as an async one if necessary.

    This function is intended to bridge the gap between AnyIO 2.x and 3.x where some functions and
    methods were changed to return regular context managers instead of async ones.

    :param cm: a regular or async context manager
    :return: an async context manager

    .. versionadded:: 2.2

    """
    if not isinstance(cm, AbstractContextManager):
        raise TypeError("Given object is not an context manager")

    return _ContextManagerWrapper(cm)


def _warn_deprecation(
    awaitable: AnyDeprecatedAwaitable[Any], stacklevel: int = 1
) -> None:
    warn(
        f'Awaiting on {awaitable._name}() is deprecated. Use "await '
        f"anyio.maybe_async({awaitable._name}(...)) if you have to support both AnyIO 2.x "
        f'and 3.x, or just remove the "await" if you are completely migrating to AnyIO 3+.',
        DeprecationWarning,
        stacklevel=stacklevel + 1,
    )


class DeprecatedAwaitable:
    def __init__(self, func: Callable[..., DeprecatedAwaitable]):
        self._name = f"{func.__module__}.{func.__qualname__}"

    def __await__(self) -> Generator[None, None, None]:
        _warn_deprecation(self)
        if False:
            yield

    def __reduce__(self) -> tuple[type[None], tuple[()]]:
        return type(None), ()

    def _unwrap(self) -> None:
        return None


class DeprecatedAwaitableFloat(float):
    def __new__(
        cls, x: float, func: Callable[..., DeprecatedAwaitableFloat]
    ) -> DeprecatedAwaitableFloat:
        return super().__new__(cls, x)

    def __init__(self, x: float, func: Callable[..., DeprecatedAwaitableFloat]):
        self._name = f"{func.__module__}.{func.__qualname__}"

    def __await__(self) -> Generator[None, None, float]:
        _warn_deprecation(self)
        if False:
            yield

        return float(self)

    def __reduce__(self) -> tuple[type[float], tuple[float]]:
        return float, (float(self),)

    def _unwrap(self) -> float:
        return float(self)


class DeprecatedAwaitableList(List[T]):
    def __init__(
        self,
        iterable: Iterable[T] = (),
        *,
        func: Callable[..., DeprecatedAwaitableList[T]],
    ):
        super().__init__(iterable)
        self._name = f"{func.__module__}.{func.__qualname__}"

    def __await__(self) -> Generator[None, None, list[T]]:
        _warn_deprecation(self)
        if False:
            yield

        return list(self)

    def __reduce__(self) -> tuple[type[list[T]], tuple[list[T]]]:
        return list, (list(self),)

    def _unwrap(self) -> list[T]:
        return list(self)


class DeprecatedAsyncContextManager(Generic[T], metaclass=ABCMeta):
    @abstractmethod
    def __enter__(self) -> T:
        pass

    @abstractmethod
    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        pass

    async def __aenter__(self) -> T:
        warn(
            f"Using {self.__class__.__name__} as an async context manager has been deprecated. "
            f'Use "async with anyio.maybe_async_cm(yourcontextmanager) as foo:" if you have to '
            f'support both AnyIO 2.x and 3.x, or just remove the "async" from "async with" if '
            f"you are completely migrating to AnyIO 3+.",
            DeprecationWarning,
        )
        return self.__enter__()

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        return self.__exit__(exc_type, exc_val, exc_tb)
@@ -1,153 +0,0 @@
from __future__ import annotations

import math
import sys
import threading
from contextlib import contextmanager
from importlib import import_module
from typing import (
    Any,
    Awaitable,
    Callable,
    Generator,
    TypeVar,
)

import sniffio

# This must be updated when new backends are introduced
from ._compat import DeprecatedAwaitableFloat

BACKENDS = "asyncio", "trio"

T_Retval = TypeVar("T_Retval")
threadlocals = threading.local()


def run(
    func: Callable[..., Awaitable[T_Retval]],
    *args: object,
    backend: str = "asyncio",
    backend_options: dict[str, Any] | None = None,
) -> T_Retval:
    """
    Run the given coroutine function in an asynchronous event loop.

    The current thread must not be already running an event loop.

    :param func: a coroutine function
    :param args: positional arguments to ``func``
    :param backend: name of the asynchronous event loop implementation – currently either
        ``asyncio`` or ``trio``
    :param backend_options: keyword arguments to call the backend ``run()`` implementation with
        (documented :ref:`here <backend options>`)
    :return: the return value of the coroutine function
    :raises RuntimeError: if an asynchronous event loop is already running in this thread
    :raises LookupError: if the named backend is not found

    """
    try:
        asynclib_name = sniffio.current_async_library()
    except sniffio.AsyncLibraryNotFoundError:
        pass
    else:
        raise RuntimeError(f"Already running {asynclib_name} in this thread")

    try:
        asynclib = import_module(f"..._backends._{backend}", package=__name__)
    except ImportError as exc:
        raise LookupError(f"No such backend: {backend}") from exc

    token = None
    if sniffio.current_async_library_cvar.get(None) is None:
        # Since we're in control of the event loop, we can cache the name of the async library
        token = sniffio.current_async_library_cvar.set(backend)

    try:
        backend_options = backend_options or {}
        return asynclib.run(func, *args, **backend_options)
    finally:
        if token:
            sniffio.current_async_library_cvar.reset(token)


async def sleep(delay: float) -> None:
    """
    Pause the current task for the specified duration.

    :param delay: the duration, in seconds

    """
    return await get_asynclib().sleep(delay)


async def sleep_forever() -> None:
    """
    Pause the current task until it's cancelled.

    This is a shortcut for ``sleep(math.inf)``.

    .. versionadded:: 3.1

    """
    await sleep(math.inf)


async def sleep_until(deadline: float) -> None:
    """
    Pause the current task until the given time.

    :param deadline: the absolute time to wake up at (according to the internal monotonic clock of
        the event loop)

    .. versionadded:: 3.1

    """
    now = current_time()
    await sleep(max(deadline - now, 0))


def current_time() -> DeprecatedAwaitableFloat:
    """
    Return the current value of the event loop's internal clock.

    :return: the clock value (seconds)

    """
    return DeprecatedAwaitableFloat(get_asynclib().current_time(), current_time)


def get_all_backends() -> tuple[str, ...]:
    """Return a tuple of the names of all built-in backends."""
    return BACKENDS


def get_cancelled_exc_class() -> type[BaseException]:
    """Return the current async library's cancellation exception class."""
    return get_asynclib().CancelledError


#
# Private API
#


@contextmanager
def claim_worker_thread(backend: str) -> Generator[Any, None, None]:
    module = sys.modules["anyio._backends._" + backend]
    threadlocals.current_async_module = module
    try:
        yield
    finally:
        del threadlocals.current_async_module


def get_asynclib(asynclib_name: str | None = None) -> Any:
    if asynclib_name is None:
        asynclib_name = sniffio.current_async_library()

    modulename = "anyio._backends._" + asynclib_name
    try:
        return sys.modules[modulename]
    except KeyError:
        return import_module(modulename)
@@ -1,94 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
from traceback import format_exception
|
|
||||||
|
|
||||||
|
|
||||||
class BrokenResourceError(Exception):
|
|
||||||
"""
|
|
||||||
Raised when trying to use a resource that has been rendered unusable due to external causes
|
|
||||||
(e.g. a send stream whose peer has disconnected).
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
class BrokenWorkerProcess(Exception):
|
|
||||||
"""
|
|
||||||
Raised by :func:`run_sync_in_process` if the worker process terminates abruptly or otherwise
|
|
||||||
misbehaves.
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
class BusyResourceError(Exception):
|
|
||||||
"""Raised when two tasks are trying to read from or write to the same resource concurrently."""
|
|
||||||
|
|
||||||
def __init__(self, action: str):
|
|
||||||
super().__init__(f"Another task is already {action} this resource")
|
|
||||||
|
|
||||||
|
|
||||||
class ClosedResourceError(Exception):
|
|
||||||
"""Raised when trying to use a resource that has been closed."""
|
|
||||||
|
|
||||||
|
|
||||||
class DelimiterNotFound(Exception):
|
|
||||||
"""
|
|
||||||
Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the
|
|
||||||
maximum number of bytes has been read without the delimiter being found.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, max_bytes: int) -> None:
|
|
||||||
super().__init__(
|
|
||||||
f"The delimiter was not found among the first {max_bytes} bytes"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class EndOfStream(Exception):
|
|
||||||
"""Raised when trying to read from a stream that has been closed from the other end."""
|
|
||||||
|
|
||||||
|
|
||||||
class ExceptionGroup(BaseException):
|
|
||||||
"""
|
|
||||||
Raised when multiple exceptions have been raised in a task group.
|
|
||||||
|
|
||||||
:var ~typing.Sequence[BaseException] exceptions: the sequence of exceptions raised together
|
|
||||||
"""
|
|
||||||
|
|
||||||
SEPARATOR = "----------------------------\n"
|
|
||||||
|
|
||||||
exceptions: list[BaseException]
|
|
||||||
|
|
||||||
def __str__(self) -> str:
|
|
||||||
tracebacks = [
|
|
||||||
"".join(format_exception(type(exc), exc, exc.__traceback__))
|
|
||||||
for exc in self.exceptions
|
|
||||||
]
|
|
||||||
return (
|
|
||||||
f"{len(self.exceptions)} exceptions were raised in the task group:\n"
|
|
||||||
f"{self.SEPARATOR}{self.SEPARATOR.join(tracebacks)}"
|
|
||||||
)
|
|
||||||
|
|
||||||
def __repr__(self) -> str:
|
|
||||||
exception_reprs = ", ".join(repr(exc) for exc in self.exceptions)
|
|
||||||
return f"<{self.__class__.__name__}: {exception_reprs}>"
|
|
||||||
|
|
||||||
|
|
||||||
class IncompleteRead(Exception):
|
|
||||||
"""
|
|
||||||
Raised during :meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_exactly` or
|
|
||||||
:meth:`~anyio.streams.buffered.BufferedByteReceiveStream.receive_until` if the
|
|
||||||
connection is closed before the requested amount of bytes has been read.
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self) -> None:
|
|
||||||
super().__init__(
|
|
||||||
"The stream was closed before the read operation could be completed"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class TypedAttributeLookupError(LookupError):
|
|
||||||
"""
|
|
||||||
Raised by :meth:`~anyio.TypedAttributeProvider.extra` when the given typed attribute is not
|
|
||||||
found and no default value has been given.
|
|
||||||
"""
|
|
||||||
|
|
||||||
|
|
||||||
class WouldBlock(Exception):
|
|
||||||
"""Raised by ``X_nowait`` functions if ``X()`` would block."""
|
|
||||||
@@ -1,603 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
import os
|
|
||||||
import pathlib
|
|
||||||
import sys
|
|
||||||
from dataclasses import dataclass
|
|
||||||
from functools import partial
|
|
||||||
from os import PathLike
|
|
||||||
from typing import (
|
|
||||||
IO,
|
|
||||||
TYPE_CHECKING,
|
|
||||||
Any,
|
|
||||||
AnyStr,
|
|
||||||
AsyncIterator,
|
|
||||||
Callable,
|
|
||||||
Generic,
|
|
||||||
Iterable,
|
|
||||||
Iterator,
|
|
||||||
Sequence,
|
|
||||||
cast,
|
|
||||||
overload,
|
|
||||||
)
|
|
||||||
|
|
||||||
from .. import to_thread
|
|
||||||
from ..abc import AsyncResource
|
|
||||||
|
|
||||||
if sys.version_info >= (3, 8):
|
|
||||||
from typing import Final
|
|
||||||
else:
|
|
||||||
from typing_extensions import Final
|
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
|
||||||
from _typeshed import OpenBinaryMode, OpenTextMode, ReadableBuffer, WriteableBuffer
|
|
||||||
else:
|
|
||||||
ReadableBuffer = OpenBinaryMode = OpenTextMode = WriteableBuffer = object
|
|
||||||
|
|
||||||
|
|
||||||
class AsyncFile(AsyncResource, Generic[AnyStr]):
|
|
||||||
"""
|
|
||||||
An asynchronous file object.
|
|
||||||
|
|
||||||
This class wraps a standard file object and provides async friendly versions of the following
|
|
||||||
blocking methods (where available on the original file object):
|
|
||||||
|
|
||||||
* read
|
|
||||||
* read1
|
|
||||||
* readline
|
|
||||||
* readlines
|
|
||||||
* readinto
|
|
||||||
* readinto1
|
|
||||||
* write
|
|
||||||
* writelines
|
|
||||||
* truncate
|
|
||||||
* seek
|
|
||||||
* tell
|
|
||||||
* flush
|
|
||||||
|
|
||||||
All other methods are directly passed through.
|
|
||||||
|
|
||||||
This class supports the asynchronous context manager protocol which closes the underlying file
|
|
||||||
at the end of the context block.
|
|
||||||
|
|
||||||
This class also supports asynchronous iteration::
|
|
||||||
|
|
||||||
async with await open_file(...) as f:
|
|
||||||
async for line in f:
|
|
||||||
print(line)
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, fp: IO[AnyStr]) -> None:
|
|
||||||
self._fp: Any = fp
|
|
||||||
|
|
||||||
def __getattr__(self, name: str) -> object:
|
|
||||||
return getattr(self._fp, name)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def wrapped(self) -> IO[AnyStr]:
|
|
||||||
"""The wrapped file object."""
|
|
||||||
return self._fp
|
|
||||||
|
|
||||||
async def __aiter__(self) -> AsyncIterator[AnyStr]:
|
|
||||||
while True:
|
|
||||||
line = await self.readline()
|
|
||||||
if line:
|
|
||||||
yield line
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
|
|
||||||
async def aclose(self) -> None:
|
|
||||||
return await to_thread.run_sync(self._fp.close)
|
|
||||||
|
|
||||||
async def read(self, size: int = -1) -> AnyStr:
|
|
||||||
return await to_thread.run_sync(self._fp.read, size)
|
|
||||||
|
|
||||||
async def read1(self: AsyncFile[bytes], size: int = -1) -> bytes:
|
|
||||||
return await to_thread.run_sync(self._fp.read1, size)
|
|
||||||
|
|
||||||
async def readline(self) -> AnyStr:
|
|
||||||
return await to_thread.run_sync(self._fp.readline)
|
|
||||||
|
|
||||||
async def readlines(self) -> list[AnyStr]:
|
|
||||||
return await to_thread.run_sync(self._fp.readlines)
|
|
||||||
|
|
||||||
async def readinto(self: AsyncFile[bytes], b: WriteableBuffer) -> bytes:
|
|
||||||
return await to_thread.run_sync(self._fp.readinto, b)
|
|
||||||
|
|
||||||
async def readinto1(self: AsyncFile[bytes], b: WriteableBuffer) -> bytes:
|
|
||||||
return await to_thread.run_sync(self._fp.readinto1, b)
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def write(self: AsyncFile[bytes], b: ReadableBuffer) -> int:
|
|
||||||
...
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def write(self: AsyncFile[str], b: str) -> int:
|
|
||||||
...
|
|
||||||
|
|
||||||
async def write(self, b: ReadableBuffer | str) -> int:
|
|
||||||
return await to_thread.run_sync(self._fp.write, b)
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def writelines(
|
|
||||||
self: AsyncFile[bytes], lines: Iterable[ReadableBuffer]
|
|
||||||
) -> None:
|
|
||||||
...
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def writelines(self: AsyncFile[str], lines: Iterable[str]) -> None:
|
|
||||||
...
|
|
||||||
|
|
||||||
async def writelines(self, lines: Iterable[ReadableBuffer] | Iterable[str]) -> None:
|
|
||||||
return await to_thread.run_sync(self._fp.writelines, lines)
|
|
||||||
|
|
||||||
async def truncate(self, size: int | None = None) -> int:
|
|
||||||
return await to_thread.run_sync(self._fp.truncate, size)
|
|
||||||
|
|
||||||
async def seek(self, offset: int, whence: int | None = os.SEEK_SET) -> int:
|
|
||||||
return await to_thread.run_sync(self._fp.seek, offset, whence)
|
|
||||||
|
|
||||||
async def tell(self) -> int:
|
|
||||||
return await to_thread.run_sync(self._fp.tell)
|
|
||||||
|
|
||||||
async def flush(self) -> None:
|
|
||||||
return await to_thread.run_sync(self._fp.flush)
|
|
||||||
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def open_file(
|
|
||||||
file: str | PathLike[str] | int,
|
|
||||||
mode: OpenBinaryMode,
|
|
||||||
buffering: int = ...,
|
|
||||||
encoding: str | None = ...,
|
|
||||||
errors: str | None = ...,
|
|
||||||
newline: str | None = ...,
|
|
||||||
closefd: bool = ...,
|
|
||||||
opener: Callable[[str, int], int] | None = ...,
|
|
||||||
) -> AsyncFile[bytes]:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def open_file(
|
|
||||||
file: str | PathLike[str] | int,
|
|
||||||
mode: OpenTextMode = ...,
|
|
||||||
buffering: int = ...,
|
|
||||||
encoding: str | None = ...,
|
|
||||||
errors: str | None = ...,
|
|
||||||
newline: str | None = ...,
|
|
||||||
closefd: bool = ...,
|
|
||||||
opener: Callable[[str, int], int] | None = ...,
|
|
||||||
) -> AsyncFile[str]:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
async def open_file(
|
|
||||||
file: str | PathLike[str] | int,
|
|
||||||
mode: str = "r",
|
|
||||||
buffering: int = -1,
|
|
||||||
encoding: str | None = None,
|
|
||||||
errors: str | None = None,
|
|
||||||
newline: str | None = None,
|
|
||||||
closefd: bool = True,
|
|
||||||
opener: Callable[[str, int], int] | None = None,
|
|
||||||
) -> AsyncFile[Any]:
|
|
||||||
"""
|
|
||||||
Open a file asynchronously.
|
|
||||||
|
|
||||||
The arguments are exactly the same as for the builtin :func:`open`.
|
|
||||||
|
|
||||||
:return: an asynchronous file object
|
|
||||||
|
|
||||||
"""
|
|
||||||
fp = await to_thread.run_sync(
|
|
||||||
open, file, mode, buffering, encoding, errors, newline, closefd, opener
|
|
||||||
)
|
|
||||||
return AsyncFile(fp)
|
|
||||||
|
|
||||||
|
|
||||||
def wrap_file(file: IO[AnyStr]) -> AsyncFile[AnyStr]:
|
|
||||||
"""
|
|
||||||
Wrap an existing file as an asynchronous file.
|
|
||||||
|
|
||||||
:param file: an existing file-like object
|
|
||||||
:return: an asynchronous file object
|
|
||||||
|
|
||||||
"""
|
|
||||||
return AsyncFile(file)
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass(eq=False)
|
|
||||||
class _PathIterator(AsyncIterator["Path"]):
|
|
||||||
iterator: Iterator[PathLike[str]]
|
|
||||||
|
|
||||||
async def __anext__(self) -> Path:
|
|
||||||
nextval = await to_thread.run_sync(next, self.iterator, None, cancellable=True)
|
|
||||||
if nextval is None:
|
|
||||||
raise StopAsyncIteration from None
|
|
||||||
|
|
||||||
return Path(cast("PathLike[str]", nextval))
|
|
||||||
|
|
||||||
|
|
||||||
class Path:
|
|
||||||
"""
|
|
||||||
An asynchronous version of :class:`pathlib.Path`.
|
|
||||||
|
|
||||||
This class cannot be substituted for :class:`pathlib.Path` or :class:`pathlib.PurePath`, but
|
|
||||||
it is compatible with the :class:`os.PathLike` interface.
|
|
||||||
|
|
||||||
It implements the Python 3.10 version of :class:`pathlib.Path` interface, except for the
|
|
||||||
deprecated :meth:`~pathlib.Path.link_to` method.
|
|
||||||
|
|
||||||
Any methods that do disk I/O need to be awaited on. These methods are:
|
|
||||||
|
|
||||||
* :meth:`~pathlib.Path.absolute`
|
|
||||||
* :meth:`~pathlib.Path.chmod`
|
|
||||||
* :meth:`~pathlib.Path.cwd`
|
|
||||||
* :meth:`~pathlib.Path.exists`
|
|
||||||
* :meth:`~pathlib.Path.expanduser`
|
|
||||||
* :meth:`~pathlib.Path.group`
|
|
||||||
* :meth:`~pathlib.Path.hardlink_to`
|
|
||||||
* :meth:`~pathlib.Path.home`
|
|
||||||
* :meth:`~pathlib.Path.is_block_device`
|
|
||||||
* :meth:`~pathlib.Path.is_char_device`
|
|
||||||
* :meth:`~pathlib.Path.is_dir`
|
|
||||||
* :meth:`~pathlib.Path.is_fifo`
|
|
||||||
* :meth:`~pathlib.Path.is_file`
|
|
||||||
* :meth:`~pathlib.Path.is_mount`
|
|
||||||
* :meth:`~pathlib.Path.lchmod`
|
|
||||||
* :meth:`~pathlib.Path.lstat`
|
|
||||||
* :meth:`~pathlib.Path.mkdir`
|
|
||||||
* :meth:`~pathlib.Path.open`
|
|
||||||
* :meth:`~pathlib.Path.owner`
|
|
||||||
* :meth:`~pathlib.Path.read_bytes`
|
|
||||||
* :meth:`~pathlib.Path.read_text`
|
|
||||||
* :meth:`~pathlib.Path.readlink`
|
|
||||||
* :meth:`~pathlib.Path.rename`
|
|
||||||
* :meth:`~pathlib.Path.replace`
|
|
||||||
* :meth:`~pathlib.Path.rmdir`
|
|
||||||
* :meth:`~pathlib.Path.samefile`
|
|
||||||
* :meth:`~pathlib.Path.stat`
|
|
||||||
* :meth:`~pathlib.Path.touch`
|
|
||||||
* :meth:`~pathlib.Path.unlink`
|
|
||||||
* :meth:`~pathlib.Path.write_bytes`
|
|
||||||
* :meth:`~pathlib.Path.write_text`
|
|
||||||
|
|
||||||
Additionally, the following methods return an async iterator yielding :class:`~.Path` objects:
|
|
||||||
|
|
||||||
* :meth:`~pathlib.Path.glob`
|
|
||||||
* :meth:`~pathlib.Path.iterdir`
|
|
||||||
* :meth:`~pathlib.Path.rglob`
|
|
||||||
"""
|
|
||||||
|
|
||||||
__slots__ = "_path", "__weakref__"
|
|
||||||
|
|
||||||
__weakref__: Any
|
|
||||||
|
|
||||||
def __init__(self, *args: str | PathLike[str]) -> None:
|
|
||||||
self._path: Final[pathlib.Path] = pathlib.Path(*args)
|
|
||||||
|
|
||||||
def __fspath__(self) -> str:
|
|
||||||
return self._path.__fspath__()
|
|
||||||
|
|
||||||
def __str__(self) -> str:
|
|
||||||
return self._path.__str__()
|
|
||||||
|
|
||||||
def __repr__(self) -> str:
|
|
||||||
return f"{self.__class__.__name__}({self.as_posix()!r})"
|
|
||||||
|
|
||||||
def __bytes__(self) -> bytes:
|
|
||||||
return self._path.__bytes__()
|
|
||||||
|
|
||||||
def __hash__(self) -> int:
|
|
||||||
return self._path.__hash__()
|
|
||||||
|
|
||||||
def __eq__(self, other: object) -> bool:
|
|
||||||
target = other._path if isinstance(other, Path) else other
|
|
||||||
return self._path.__eq__(target)
|
|
||||||
|
|
||||||
def __lt__(self, other: Path) -> bool:
|
|
||||||
target = other._path if isinstance(other, Path) else other
|
|
||||||
return self._path.__lt__(target)
|
|
||||||
|
|
||||||
def __le__(self, other: Path) -> bool:
|
|
||||||
target = other._path if isinstance(other, Path) else other
|
|
||||||
return self._path.__le__(target)
|
|
||||||
|
|
||||||
def __gt__(self, other: Path) -> bool:
|
|
||||||
target = other._path if isinstance(other, Path) else other
|
|
||||||
return self._path.__gt__(target)
|
|
||||||
|
|
||||||
def __ge__(self, other: Path) -> bool:
|
|
||||||
target = other._path if isinstance(other, Path) else other
|
|
||||||
return self._path.__ge__(target)
|
|
||||||
|
|
||||||
def __truediv__(self, other: Any) -> Path:
|
|
||||||
return Path(self._path / other)
|
|
||||||
|
|
||||||
def __rtruediv__(self, other: Any) -> Path:
|
|
||||||
return Path(other) / self
|
|
||||||
|
|
||||||
@property
|
|
||||||
def parts(self) -> tuple[str, ...]:
|
|
||||||
return self._path.parts
|
|
||||||
|
|
||||||
@property
|
|
||||||
def drive(self) -> str:
|
|
||||||
return self._path.drive
|
|
||||||
|
|
||||||
@property
|
|
||||||
def root(self) -> str:
|
|
||||||
return self._path.root
|
|
||||||
|
|
||||||
@property
|
|
||||||
def anchor(self) -> str:
|
|
||||||
return self._path.anchor
|
|
||||||
|
|
||||||
@property
|
|
||||||
def parents(self) -> Sequence[Path]:
|
|
||||||
return tuple(Path(p) for p in self._path.parents)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def parent(self) -> Path:
|
|
||||||
return Path(self._path.parent)
|
|
||||||
|
|
||||||
@property
|
|
||||||
def name(self) -> str:
|
|
||||||
return self._path.name
|
|
||||||
|
|
||||||
@property
|
|
||||||
def suffix(self) -> str:
|
|
||||||
return self._path.suffix
|
|
||||||
|
|
||||||
@property
|
|
||||||
def suffixes(self) -> list[str]:
|
|
||||||
return self._path.suffixes
|
|
||||||
|
|
||||||
@property
|
|
||||||
def stem(self) -> str:
|
|
||||||
return self._path.stem
|
|
||||||
|
|
||||||
async def absolute(self) -> Path:
|
|
||||||
path = await to_thread.run_sync(self._path.absolute)
|
|
||||||
return Path(path)
|
|
||||||
|
|
||||||
def as_posix(self) -> str:
|
|
||||||
return self._path.as_posix()
|
|
||||||
|
|
||||||
def as_uri(self) -> str:
|
|
||||||
return self._path.as_uri()
|
|
||||||
|
|
||||||
def match(self, path_pattern: str) -> bool:
|
|
||||||
return self._path.match(path_pattern)
|
|
||||||
|
|
||||||
def is_relative_to(self, *other: str | PathLike[str]) -> bool:
|
|
||||||
try:
|
|
||||||
self.relative_to(*other)
|
|
||||||
return True
|
|
||||||
except ValueError:
|
|
||||||
return False
|
|
||||||
|
|
||||||
async def chmod(self, mode: int, *, follow_symlinks: bool = True) -> None:
|
|
||||||
func = partial(os.chmod, follow_symlinks=follow_symlinks)
|
|
||||||
return await to_thread.run_sync(func, self._path, mode)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
async def cwd(cls) -> Path:
|
|
||||||
path = await to_thread.run_sync(pathlib.Path.cwd)
|
|
||||||
return cls(path)
|
|
||||||
|
|
||||||
async def exists(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.exists, cancellable=True)
|
|
||||||
|
|
||||||
async def expanduser(self) -> Path:
|
|
||||||
return Path(await to_thread.run_sync(self._path.expanduser, cancellable=True))
|
|
||||||
|
|
||||||
def glob(self, pattern: str) -> AsyncIterator[Path]:
|
|
||||||
gen = self._path.glob(pattern)
|
|
||||||
return _PathIterator(gen)
|
|
||||||
|
|
||||||
async def group(self) -> str:
|
|
||||||
return await to_thread.run_sync(self._path.group, cancellable=True)
|
|
||||||
|
|
||||||
async def hardlink_to(self, target: str | pathlib.Path | Path) -> None:
|
|
||||||
if isinstance(target, Path):
|
|
||||||
target = target._path
|
|
||||||
|
|
||||||
await to_thread.run_sync(os.link, target, self)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
async def home(cls) -> Path:
|
|
||||||
home_path = await to_thread.run_sync(pathlib.Path.home)
|
|
||||||
return cls(home_path)
|
|
||||||
|
|
||||||
def is_absolute(self) -> bool:
|
|
||||||
return self._path.is_absolute()
|
|
||||||
|
|
||||||
async def is_block_device(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_block_device, cancellable=True)
|
|
||||||
|
|
||||||
async def is_char_device(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_char_device, cancellable=True)
|
|
||||||
|
|
||||||
async def is_dir(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_dir, cancellable=True)
|
|
||||||
|
|
||||||
async def is_fifo(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_fifo, cancellable=True)
|
|
||||||
|
|
||||||
async def is_file(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_file, cancellable=True)
|
|
||||||
|
|
||||||
async def is_mount(self) -> bool:
|
|
||||||
return await to_thread.run_sync(os.path.ismount, self._path, cancellable=True)
|
|
||||||
|
|
||||||
def is_reserved(self) -> bool:
|
|
||||||
return self._path.is_reserved()
|
|
||||||
|
|
||||||
async def is_socket(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_socket, cancellable=True)
|
|
||||||
|
|
||||||
async def is_symlink(self) -> bool:
|
|
||||||
return await to_thread.run_sync(self._path.is_symlink, cancellable=True)
|
|
||||||
|
|
||||||
def iterdir(self) -> AsyncIterator[Path]:
|
|
||||||
gen = self._path.iterdir()
|
|
||||||
return _PathIterator(gen)
|
|
||||||
|
|
||||||
def joinpath(self, *args: str | PathLike[str]) -> Path:
|
|
||||||
return Path(self._path.joinpath(*args))
|
|
||||||
|
|
||||||
async def lchmod(self, mode: int) -> None:
|
|
||||||
await to_thread.run_sync(self._path.lchmod, mode)
|
|
||||||
|
|
||||||
async def lstat(self) -> os.stat_result:
|
|
||||||
return await to_thread.run_sync(self._path.lstat, cancellable=True)
|
|
||||||
|
|
||||||
async def mkdir(
|
|
||||||
self, mode: int = 0o777, parents: bool = False, exist_ok: bool = False
|
|
||||||
) -> None:
|
|
||||||
await to_thread.run_sync(self._path.mkdir, mode, parents, exist_ok)
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def open(
|
|
||||||
self,
|
|
||||||
mode: OpenBinaryMode,
|
|
||||||
buffering: int = ...,
|
|
||||||
encoding: str | None = ...,
|
|
||||||
errors: str | None = ...,
|
|
||||||
newline: str | None = ...,
|
|
||||||
) -> AsyncFile[bytes]:
|
|
||||||
...
|
|
||||||
|
|
||||||
@overload
|
|
||||||
async def open(
|
|
||||||
self,
|
|
||||||
mode: OpenTextMode = ...,
|
|
||||||
buffering: int = ...,
|
|
||||||
encoding: str | None = ...,
|
|
||||||
errors: str | None = ...,
|
|
||||||
newline: str | None = ...,
|
|
||||||
) -> AsyncFile[str]:
|
|
||||||
...
|
|
||||||
|
|
||||||
async def open(
|
|
||||||
self,
|
|
||||||
mode: str = "r",
|
|
||||||
buffering: int = -1,
|
|
||||||
encoding: str | None = None,
|
|
||||||
errors: str | None = None,
|
|
||||||
newline: str | None = None,
|
|
||||||
) -> AsyncFile[Any]:
|
|
||||||
fp = await to_thread.run_sync(
|
|
||||||
self._path.open, mode, buffering, encoding, errors, newline
|
|
||||||
)
|
|
||||||
return AsyncFile(fp)
|
|
||||||
|
|
||||||
async def owner(self) -> str:
|
|
||||||
return await to_thread.run_sync(self._path.owner, cancellable=True)
|
|
||||||
|
|
||||||
async def read_bytes(self) -> bytes:
|
|
||||||
return await to_thread.run_sync(self._path.read_bytes)
|
|
||||||
|
|
||||||
async def read_text(
|
|
||||||
self, encoding: str | None = None, errors: str | None = None
|
|
||||||
) -> str:
|
|
||||||
return await to_thread.run_sync(self._path.read_text, encoding, errors)
|
|
||||||
|
|
||||||
def relative_to(self, *other: str | PathLike[str]) -> Path:
|
|
||||||
return Path(self._path.relative_to(*other))
|
|
||||||
|
|
||||||
async def readlink(self) -> Path:
|
|
||||||
target = await to_thread.run_sync(os.readlink, self._path)
|
|
||||||
return Path(cast(str, target))
|
|
||||||
|
|
||||||
async def rename(self, target: str | pathlib.PurePath | Path) -> Path:
|
|
||||||
if isinstance(target, Path):
|
|
||||||
target = target._path
|
|
||||||
|
|
||||||
await to_thread.run_sync(self._path.rename, target)
|
|
||||||
return Path(target)
|
|
||||||
|
|
||||||
async def replace(self, target: str | pathlib.PurePath | Path) -> Path:
|
|
||||||
if isinstance(target, Path):
|
|
||||||
target = target._path
|
|
||||||
|
|
||||||
await to_thread.run_sync(self._path.replace, target)
|
|
||||||
return Path(target)
|
|
||||||
|
|
||||||
async def resolve(self, strict: bool = False) -> Path:
|
|
||||||
func = partial(self._path.resolve, strict=strict)
|
|
||||||
return Path(await to_thread.run_sync(func, cancellable=True))
|
|
||||||
|
|
||||||
def rglob(self, pattern: str) -> AsyncIterator[Path]:
|
|
||||||
gen = self._path.rglob(pattern)
|
|
||||||
return _PathIterator(gen)
|
|
||||||
|
|
||||||
async def rmdir(self) -> None:
|
|
||||||
await to_thread.run_sync(self._path.rmdir)
|
|
||||||
|
|
||||||
async def samefile(
|
|
||||||
self, other_path: str | bytes | int | pathlib.Path | Path
|
|
||||||
) -> bool:
|
|
||||||
if isinstance(other_path, Path):
|
|
||||||
other_path = other_path._path
|
|
||||||
|
|
||||||
return await to_thread.run_sync(
|
|
||||||
self._path.samefile, other_path, cancellable=True
|
|
||||||
)
|
|
||||||
|
|
||||||
async def stat(self, *, follow_symlinks: bool = True) -> os.stat_result:
|
|
||||||
func = partial(os.stat, follow_symlinks=follow_symlinks)
|
|
||||||
return await to_thread.run_sync(func, self._path, cancellable=True)
|
|
||||||
|
|
||||||
async def symlink_to(
|
|
||||||
self,
|
|
||||||
target: str | pathlib.Path | Path,
|
|
||||||
target_is_directory: bool = False,
|
|
||||||
) -> None:
|
|
||||||
if isinstance(target, Path):
|
|
||||||
target = target._path
|
|
||||||
|
|
||||||
await to_thread.run_sync(self._path.symlink_to, target, target_is_directory)
|
|
||||||
|
|
||||||
async def touch(self, mode: int = 0o666, exist_ok: bool = True) -> None:
|
|
||||||
await to_thread.run_sync(self._path.touch, mode, exist_ok)
|
|
||||||
|
|
||||||
async def unlink(self, missing_ok: bool = False) -> None:
|
|
||||||
try:
|
|
||||||
await to_thread.run_sync(self._path.unlink)
|
|
||||||
except FileNotFoundError:
|
|
||||||
if not missing_ok:
|
|
||||||
raise
|
|
||||||
|
|
||||||
def with_name(self, name: str) -> Path:
|
|
||||||
return Path(self._path.with_name(name))
|
|
||||||
|
|
||||||
def with_stem(self, stem: str) -> Path:
|
|
||||||
return Path(self._path.with_name(stem + self._path.suffix))
|
|
||||||
|
|
||||||
def with_suffix(self, suffix: str) -> Path:
|
|
||||||
return Path(self._path.with_suffix(suffix))
|
|
||||||
|
|
||||||
async def write_bytes(self, data: bytes) -> int:
|
|
||||||
return await to_thread.run_sync(self._path.write_bytes, data)
|
|
||||||
|
|
||||||
async def write_text(
|
|
||||||
self,
|
|
||||||
data: str,
|
|
||||||
encoding: str | None = None,
|
|
||||||
errors: str | None = None,
|
|
||||||
newline: str | None = None,
|
|
||||||
) -> int:
|
|
||||||
# Path.write_text() does not support the "newline" parameter before Python 3.10
|
|
||||||
def sync_write_text() -> int:
|
|
||||||
with self._path.open(
|
|
||||||
"w", encoding=encoding, errors=errors, newline=newline
|
|
||||||
) as fp:
|
|
||||||
return fp.write(data)
|
|
||||||
|
|
||||||
return await to_thread.run_sync(sync_write_text)
|
|
||||||
|
|
||||||
|
|
||||||
PathLike.register(Path)
|
|
||||||
@@ -1,18 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
from ..abc import AsyncResource
|
|
||||||
from ._tasks import CancelScope
|
|
||||||
|
|
||||||
|
|
||||||
async def aclose_forcefully(resource: AsyncResource) -> None:
|
|
||||||
"""
|
|
||||||
Close an asynchronous resource in a cancelled scope.
|
|
||||||
|
|
||||||
Doing this closes the resource without waiting on anything.
|
|
||||||
|
|
||||||
:param resource: the resource to close
|
|
||||||
|
|
||||||
"""
|
|
||||||
with CancelScope() as scope:
|
|
||||||
scope.cancel()
|
|
||||||
await resource.aclose()
|
|
||||||
@@ -1,26 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
from typing import AsyncIterator
|
|
||||||
|
|
||||||
from ._compat import DeprecatedAsyncContextManager
|
|
||||||
from ._eventloop import get_asynclib
|
|
||||||
|
|
||||||
|
|
||||||
def open_signal_receiver(
|
|
||||||
*signals: int,
|
|
||||||
) -> DeprecatedAsyncContextManager[AsyncIterator[int]]:
|
|
||||||
"""
|
|
||||||
Start receiving operating system signals.
|
|
||||||
|
|
||||||
:param signals: signals to receive (e.g. ``signal.SIGINT``)
|
|
||||||
:return: an asynchronous context manager for an asynchronous iterator which yields signal
|
|
||||||
numbers
|
|
||||||
|
|
||||||
.. warning:: Windows does not support signals natively so it is best to avoid relying on this
|
|
||||||
in cross-platform applications.
|
|
||||||
|
|
||||||
.. warning:: On asyncio, this permanently replaces any previous signal handler for the given
|
|
||||||
signals, as set via :meth:`~asyncio.loop.add_signal_handler`.
|
|
||||||
|
|
||||||
"""
|
|
||||||
return get_asynclib().open_signal_receiver(*signals)
|
|
||||||
@@ -1,607 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
import socket
|
|
||||||
import ssl
|
|
||||||
import sys
|
|
||||||
from ipaddress import IPv6Address, ip_address
|
|
||||||
from os import PathLike, chmod
|
|
||||||
from pathlib import Path
|
|
||||||
from socket import AddressFamily, SocketKind
|
|
||||||
from typing import Awaitable, List, Tuple, cast, overload
|
|
||||||
|
|
||||||
from .. import to_thread
|
|
||||||
from ..abc import (
|
|
||||||
ConnectedUDPSocket,
|
|
||||||
IPAddressType,
|
|
||||||
IPSockAddrType,
|
|
||||||
SocketListener,
|
|
||||||
SocketStream,
|
|
||||||
UDPSocket,
|
|
||||||
UNIXSocketStream,
|
|
||||||
)
|
|
||||||
from ..streams.stapled import MultiListener
|
|
||||||
from ..streams.tls import TLSStream
|
|
||||||
from ._eventloop import get_asynclib
|
|
||||||
from ._resources import aclose_forcefully
|
|
||||||
from ._synchronization import Event
|
|
||||||
from ._tasks import create_task_group, move_on_after
|
|
||||||
|
|
||||||
if sys.version_info >= (3, 8):
|
|
||||||
from typing import Literal
|
|
||||||
else:
|
|
||||||
from typing_extensions import Literal
|
|
||||||
|
|
||||||
IPPROTO_IPV6 = getattr(socket, "IPPROTO_IPV6", 41) # https://bugs.python.org/issue29515
|
|
||||||
|
|
||||||
GetAddrInfoReturnType = List[
|
|
||||||
Tuple[AddressFamily, SocketKind, int, str, Tuple[str, int]]
|
|
||||||
]
|
|
||||||
AnyIPAddressFamily = Literal[
|
|
||||||
AddressFamily.AF_UNSPEC, AddressFamily.AF_INET, AddressFamily.AF_INET6
|
|
||||||
]
|
|
||||||
IPAddressFamily = Literal[AddressFamily.AF_INET, AddressFamily.AF_INET6]
|
|
||||||
|
|
||||||
|
|
||||||
# tls_hostname given
|
|
||||||
@overload
|
|
||||||
async def connect_tcp(
|
|
||||||
remote_host: IPAddressType,
|
|
||||||
remote_port: int,
|
|
||||||
*,
|
|
||||||
local_host: IPAddressType | None = ...,
|
|
||||||
ssl_context: ssl.SSLContext | None = ...,
|
|
||||||
tls_standard_compatible: bool = ...,
|
|
||||||
tls_hostname: str,
|
|
||||||
happy_eyeballs_delay: float = ...,
|
|
||||||
) -> TLSStream:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
# ssl_context given
|
|
||||||
@overload
|
|
||||||
async def connect_tcp(
|
|
||||||
remote_host: IPAddressType,
|
|
||||||
remote_port: int,
|
|
||||||
*,
|
|
||||||
local_host: IPAddressType | None = ...,
|
|
||||||
ssl_context: ssl.SSLContext,
|
|
||||||
tls_standard_compatible: bool = ...,
|
|
||||||
tls_hostname: str | None = ...,
|
|
||||||
happy_eyeballs_delay: float = ...,
|
|
||||||
) -> TLSStream:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
# tls=True
|
|
||||||
@overload
|
|
||||||
async def connect_tcp(
|
|
||||||
remote_host: IPAddressType,
|
|
||||||
remote_port: int,
|
|
||||||
*,
|
|
||||||
local_host: IPAddressType | None = ...,
|
|
||||||
tls: Literal[True],
|
|
||||||
ssl_context: ssl.SSLContext | None = ...,
|
|
||||||
tls_standard_compatible: bool = ...,
|
|
||||||
tls_hostname: str | None = ...,
|
|
||||||
happy_eyeballs_delay: float = ...,
|
|
||||||
) -> TLSStream:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
# tls=False
|
|
||||||
@overload
|
|
||||||
async def connect_tcp(
|
|
||||||
remote_host: IPAddressType,
|
|
||||||
remote_port: int,
|
|
||||||
*,
|
|
||||||
local_host: IPAddressType | None = ...,
|
|
||||||
tls: Literal[False],
|
|
||||||
ssl_context: ssl.SSLContext | None = ...,
|
|
||||||
tls_standard_compatible: bool = ...,
|
|
||||||
tls_hostname: str | None = ...,
|
|
||||||
happy_eyeballs_delay: float = ...,
|
|
||||||
) -> SocketStream:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
# No TLS arguments
|
|
||||||
@overload
|
|
||||||
async def connect_tcp(
|
|
||||||
remote_host: IPAddressType,
|
|
||||||
remote_port: int,
|
|
||||||
*,
|
|
||||||
local_host: IPAddressType | None = ...,
|
|
||||||
happy_eyeballs_delay: float = ...,
|
|
||||||
) -> SocketStream:
|
|
||||||
...
|
|
||||||
|
|
||||||
|
|
||||||
async def connect_tcp(
    remote_host: IPAddressType,
    remote_port: int,
    *,
    local_host: IPAddressType | None = None,
    tls: bool = False,
    ssl_context: ssl.SSLContext | None = None,
    tls_standard_compatible: bool = True,
    tls_hostname: str | None = None,
    happy_eyeballs_delay: float = 0.25,
) -> SocketStream | TLSStream:
    """
    Connect to a host using the TCP protocol.

    This function implements the stateless version of the Happy Eyeballs algorithm (RFC
    6555). If ``remote_host`` is a host name that resolves to multiple IP addresses,
    each one is tried until one connection attempt succeeds. If the first attempt does
    not connect within 250 milliseconds, a second attempt is started using the next
    address in the list, and so on. On IPv6 enabled systems, an IPv6 address (if
    available) is tried first.

    When the connection has been established, a TLS handshake will be done if either
    ``ssl_context`` or ``tls_hostname`` is not ``None``, or if ``tls`` is ``True``.

    :param remote_host: the IP address or host name to connect to
    :param remote_port: port on the target host to connect to
    :param local_host: the interface address or name to bind the socket to before
        connecting
    :param tls: ``True`` to do a TLS handshake with the connected stream and return a
        :class:`~anyio.streams.tls.TLSStream` instead
    :param ssl_context: the SSL context object to use (if omitted, a default context is
        created)
    :param tls_standard_compatible: If ``True``, performs the TLS shutdown handshake
        before closing the stream and requires that the server does this as well.
        Otherwise, :exc:`~ssl.SSLEOFError` may be raised during reads from the stream.
        Some protocols, such as HTTP, require this option to be ``False``.
        See :meth:`~ssl.SSLContext.wrap_socket` for details.
    :param tls_hostname: host name to check the server certificate against (defaults to
        the value of ``remote_host``)
    :param happy_eyeballs_delay: delay (in seconds) before starting the next connection
        attempt
    :return: a socket stream object if no TLS handshake was done, otherwise a TLS stream
    :raises OSError: if the connection attempt fails

    """
    # Placed here due to https://github.com/python/mypy/issues/7057
    connected_stream: SocketStream | None = None

    async def try_connect(remote_host: str, event: Event) -> None:
        nonlocal connected_stream
        try:
            stream = await asynclib.connect_tcp(remote_host, remote_port, local_address)
        except OSError as exc:
            oserrors.append(exc)
            return
        else:
            if connected_stream is None:
                connected_stream = stream
                tg.cancel_scope.cancel()
            else:
                await stream.aclose()
        finally:
            event.set()

    asynclib = get_asynclib()
    local_address: IPSockAddrType | None = None
    family = socket.AF_UNSPEC
    if local_host:
        gai_res = await getaddrinfo(str(local_host), None)
        family, *_, local_address = gai_res[0]

    target_host = str(remote_host)
    try:
        addr_obj = ip_address(remote_host)
    except ValueError:
        # getaddrinfo() will raise an exception if name resolution fails
        gai_res = await getaddrinfo(
            target_host, remote_port, family=family, type=socket.SOCK_STREAM
        )

        # Organize the list so that the first address is an IPv6 address (if available)
        # and the second one is an IPv4 address. The rest can be in whatever order.
        v6_found = v4_found = False
        target_addrs: list[tuple[socket.AddressFamily, str]] = []
        for af, *rest, sa in gai_res:
            if af == socket.AF_INET6 and not v6_found:
                v6_found = True
                target_addrs.insert(0, (af, sa[0]))
            elif af == socket.AF_INET and not v4_found and v6_found:
                v4_found = True
                target_addrs.insert(1, (af, sa[0]))
            else:
                target_addrs.append((af, sa[0]))
    else:
        if isinstance(addr_obj, IPv6Address):
            target_addrs = [(socket.AF_INET6, addr_obj.compressed)]
        else:
            target_addrs = [(socket.AF_INET, addr_obj.compressed)]

    oserrors: list[OSError] = []
    async with create_task_group() as tg:
        for i, (af, addr) in enumerate(target_addrs):
            event = Event()
            tg.start_soon(try_connect, addr, event)
            with move_on_after(happy_eyeballs_delay):
                await event.wait()

    if connected_stream is None:
        cause = oserrors[0] if len(oserrors) == 1 else asynclib.ExceptionGroup(oserrors)
        raise OSError("All connection attempts failed") from cause

    if tls or tls_hostname or ssl_context:
        try:
            return await TLSStream.wrap(
                connected_stream,
                server_side=False,
                hostname=tls_hostname or str(remote_host),
                ssl_context=ssl_context,
                standard_compatible=tls_standard_compatible,
            )
        except BaseException:
            await aclose_forcefully(connected_stream)
            raise

    return connected_stream


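The address-reordering step above (first IPv6 address to the front, first IPv4 address to second place, the rest in resolution order) can be sketched in isolation against a mocked `getaddrinfo()` result. `order_addrs` and `sample_gai` are illustrative names for this sketch, not part of anyio:

```python
import socket

def order_addrs(gai_res):
    # Mirror of the loop in connect_tcp(): put the first IPv6 address at
    # index 0 and the first IPv4 address at index 1 (only once an IPv6
    # address has been seen); everything else keeps its order.
    v6_found = v4_found = False
    target_addrs = []
    for af, *rest, sa in gai_res:
        if af == socket.AF_INET6 and not v6_found:
            v6_found = True
            target_addrs.insert(0, (af, sa[0]))
        elif af == socket.AF_INET and not v4_found and v6_found:
            v4_found = True
            target_addrs.insert(1, (af, sa[0]))
        else:
            target_addrs.append((af, sa[0]))
    return target_addrs

# A mocked getaddrinfo() result: two IPv4 entries followed by one IPv6 entry
sample_gai = [
    (socket.AF_INET, socket.SOCK_STREAM, 6, "", ("93.184.216.34", 443)),
    (socket.AF_INET, socket.SOCK_STREAM, 6, "", ("93.184.216.35", 443)),
    (socket.AF_INET6, socket.SOCK_STREAM, 6, "", ("2606:2800:220:1::", 443, 0, 0)),
]
ordered = order_addrs(sample_gai)
print(ordered[0][1])  # the IPv6 address ends up first
```

With the delay between attempts (`happy_eyeballs_delay`, 250 ms by default), this ordering is what makes IPv6 the preferred path without stalling on broken IPv6 connectivity.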
async def connect_unix(path: str | PathLike[str]) -> UNIXSocketStream:
    """
    Connect to the given UNIX socket.

    Not available on Windows.

    :param path: path to the socket
    :return: a socket stream object

    """
    path = str(Path(path))
    return await get_asynclib().connect_unix(path)


async def create_tcp_listener(
    *,
    local_host: IPAddressType | None = None,
    local_port: int = 0,
    family: AnyIPAddressFamily = socket.AddressFamily.AF_UNSPEC,
    backlog: int = 65536,
    reuse_port: bool = False,
) -> MultiListener[SocketStream]:
    """
    Create a TCP socket listener.

    :param local_port: port number to listen on
    :param local_host: IP address of the interface to listen on. If omitted, listen on
        all IPv4 and IPv6 interfaces. To listen on all interfaces on a specific address
        family, use ``0.0.0.0`` for IPv4 or ``::`` for IPv6.
    :param family: address family (used if ``local_host`` was omitted)
    :param backlog: maximum number of queued incoming connections (up to a maximum of
        2**16, or 65536)
    :param reuse_port: ``True`` to allow multiple sockets to bind to the same
        address/port (not supported on Windows)
    :return: a list of listener objects

    """
    asynclib = get_asynclib()
    backlog = min(backlog, 65536)
    local_host = str(local_host) if local_host is not None else None
    gai_res = await getaddrinfo(
        local_host,  # type: ignore[arg-type]
        local_port,
        family=family,
        type=socket.SocketKind.SOCK_STREAM if sys.platform == "win32" else 0,
        flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG,
    )
    listeners: list[SocketListener] = []
    try:
        # The set() is here to work around a glibc bug:
        # https://sourceware.org/bugzilla/show_bug.cgi?id=14969
        sockaddr: tuple[str, int] | tuple[str, int, int, int]
        for fam, kind, *_, sockaddr in sorted(set(gai_res)):
            # Workaround for an uvloop bug where we don't get the correct scope ID for
            # IPv6 link-local addresses when passing type=socket.SOCK_STREAM to
            # getaddrinfo(): https://github.com/MagicStack/uvloop/issues/539
            if sys.platform != "win32" and kind is not SocketKind.SOCK_STREAM:
                continue

            raw_socket = socket.socket(fam)
            raw_socket.setblocking(False)

            # For Windows, enable exclusive address use. For others, enable address reuse.
            if sys.platform == "win32":
                raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
            else:
                raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

            if reuse_port:
                raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)

            # If only IPv6 was requested, disable dual stack operation
            if fam == socket.AF_INET6:
                raw_socket.setsockopt(IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)

            # Workaround for #554
            if "%" in sockaddr[0]:
                addr, scope_id = sockaddr[0].split("%", 1)
                sockaddr = (addr, sockaddr[1], 0, int(scope_id))

            raw_socket.bind(sockaddr)
            raw_socket.listen(backlog)
            listener = asynclib.TCPSocketListener(raw_socket)
            listeners.append(listener)
    except BaseException:
        for listener in listeners:
            await listener.aclose()

        raise

    return MultiListener(listeners)


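The per-address socket setup inside that loop can be reproduced for a single IPv4 loopback address with only the standard library. This is a minimal sketch of the option handling, not anyio's listener itself:

```python
import socket
import sys

# Mirror the per-address setup from create_tcp_listener() for one
# IPv4 loopback address.
raw_socket = socket.socket(socket.AF_INET)
raw_socket.setblocking(False)

# Windows gets exclusive address use; everything else gets address reuse.
if sys.platform == "win32":
    raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
else:
    raw_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

raw_socket.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
raw_socket.listen(128)
host, port = raw_socket.getsockname()
print(host, port)
raw_socket.close()
```

Binding to port 0 and reading the chosen port back with `getsockname()` is the same trick tests commonly use to avoid hard-coded ports.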
async def create_unix_listener(
    path: str | PathLike[str],
    *,
    mode: int | None = None,
    backlog: int = 65536,
) -> SocketListener:
    """
    Create a UNIX socket listener.

    Not available on Windows.

    :param path: path of the socket
    :param mode: permissions to set on the socket
    :param backlog: maximum number of queued incoming connections (up to a maximum of
        2**16, or 65536)
    :return: a listener object

    .. versionchanged:: 3.0
        If a socket already exists on the file system in the given path, it will be
        removed first.

    """
    path_str = str(path)
    path = Path(path)
    if path.is_socket():
        path.unlink()

    backlog = min(backlog, 65536)
    raw_socket = socket.socket(socket.AF_UNIX)
    raw_socket.setblocking(False)
    try:
        await to_thread.run_sync(raw_socket.bind, path_str, cancellable=True)
        if mode is not None:
            await to_thread.run_sync(chmod, path_str, mode, cancellable=True)

        raw_socket.listen(backlog)
        return get_asynclib().UNIXSocketListener(raw_socket)
    except BaseException:
        raw_socket.close()
        raise


async def create_udp_socket(
    family: AnyIPAddressFamily = AddressFamily.AF_UNSPEC,
    *,
    local_host: IPAddressType | None = None,
    local_port: int = 0,
    reuse_port: bool = False,
) -> UDPSocket:
    """
    Create a UDP socket.

    If ``local_port`` has been given, the socket will be bound to this port on the
    local machine, making this socket suitable for providing UDP based services.

    :param family: address family (``AF_INET`` or ``AF_INET6``) – automatically
        determined from ``local_host`` if omitted
    :param local_host: IP address or host name of the local interface to bind to
    :param local_port: local port to bind to
    :param reuse_port: ``True`` to allow multiple sockets to bind to the same
        address/port (not supported on Windows)
    :return: a UDP socket

    """
    if family is AddressFamily.AF_UNSPEC and not local_host:
        raise ValueError('Either "family" or "local_host" must be given')

    if local_host:
        gai_res = await getaddrinfo(
            str(local_host),
            local_port,
            family=family,
            type=socket.SOCK_DGRAM,
            flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG,
        )
        family = cast(AnyIPAddressFamily, gai_res[0][0])
        local_address = gai_res[0][-1]
    elif family is AddressFamily.AF_INET6:
        local_address = ("::", 0)
    else:
        local_address = ("0.0.0.0", 0)

    return await get_asynclib().create_udp_socket(
        family, local_address, None, reuse_port
    )


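What this ultimately sets up is an unconnected datagram socket bound to a local address. A standard-library sketch of the same idea, sending a packet to itself over loopback:

```python
import socket

# An unconnected UDP socket bound to an ephemeral loopback port, roughly
# what create_udp_socket(local_host="127.0.0.1") arranges under the hood.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.bind(("127.0.0.1", 0))
addr = sock.getsockname()

# Unconnected sockets carry the destination per packet (sendto/recvfrom),
# unlike the connected variant created by create_connected_udp_socket().
sock.sendto(b"ping", addr)
data, sender = sock.recvfrom(65536)
print(data)
sock.close()
```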
async def create_connected_udp_socket(
    remote_host: IPAddressType,
    remote_port: int,
    *,
    family: AnyIPAddressFamily = AddressFamily.AF_UNSPEC,
    local_host: IPAddressType | None = None,
    local_port: int = 0,
    reuse_port: bool = False,
) -> ConnectedUDPSocket:
    """
    Create a connected UDP socket.

    Connected UDP sockets can only communicate with the specified remote host/port, and
    any packets sent from other sources are dropped.

    :param remote_host: remote host to set as the default target
    :param remote_port: port on the remote host to set as the default target
    :param family: address family (``AF_INET`` or ``AF_INET6``) – automatically
        determined from ``local_host`` or ``remote_host`` if omitted
    :param local_host: IP address or host name of the local interface to bind to
    :param local_port: local port to bind to
    :param reuse_port: ``True`` to allow multiple sockets to bind to the same
        address/port (not supported on Windows)
    :return: a connected UDP socket

    """
    local_address = None
    if local_host:
        gai_res = await getaddrinfo(
            str(local_host),
            local_port,
            family=family,
            type=socket.SOCK_DGRAM,
            flags=socket.AI_PASSIVE | socket.AI_ADDRCONFIG,
        )
        family = cast(AnyIPAddressFamily, gai_res[0][0])
        local_address = gai_res[0][-1]

    gai_res = await getaddrinfo(
        str(remote_host), remote_port, family=family, type=socket.SOCK_DGRAM
    )
    family = cast(AnyIPAddressFamily, gai_res[0][0])
    remote_address = gai_res[0][-1]

    return await get_asynclib().create_udp_socket(
        family, local_address, remote_address, reuse_port
    )


async def getaddrinfo(
    host: bytearray | bytes | str,
    port: str | int | None,
    *,
    family: int | AddressFamily = 0,
    type: int | SocketKind = 0,
    proto: int = 0,
    flags: int = 0,
) -> GetAddrInfoReturnType:
    """
    Look up a numeric IP address given a host name.

    Internationalized domain names are translated according to the (non-transitional)
    IDNA 2008 standard.

    .. note:: 4-tuple IPv6 socket addresses are automatically converted to 2-tuples of
        (host, port), unlike what :func:`socket.getaddrinfo` does.

    :param host: host name
    :param port: port number
    :param family: socket family (``AF_INET``, ...)
    :param type: socket type (``SOCK_STREAM``, ...)
    :param proto: protocol number
    :param flags: flags to pass to upstream ``getaddrinfo()``
    :return: list of tuples containing (family, type, proto, canonname, sockaddr)

    .. seealso:: :func:`socket.getaddrinfo`

    """
    # Handle unicode hostnames
    if isinstance(host, str):
        try:
            encoded_host = host.encode("ascii")
        except UnicodeEncodeError:
            import idna

            encoded_host = idna.encode(host, uts46=True)
    else:
        encoded_host = host

    gai_res = await get_asynclib().getaddrinfo(
        encoded_host, port, family=family, type=type, proto=proto, flags=flags
    )
    return [
        (family, type, proto, canonname, convert_ipv6_sockaddr(sockaddr))
        for family, type, proto, canonname, sockaddr in gai_res
    ]


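The host-name handling above has two paths: a fast ASCII path and an IDNA fallback for internationalized names (anyio uses the third-party `idna` package for IDNA 2008). The sketch below swaps in the standard library's built-in ``idna`` codec, which implements the older IDNA 2003 rules, so it is an approximation of the behavior rather than the real thing:

```python
def encode_host(host: str) -> bytes:
    # Fast path: plain ASCII host names need no IDNA processing.
    try:
        return host.encode("ascii")
    except UnicodeEncodeError:
        # Stand-in for idna.encode(host, uts46=True); the stdlib codec
        # implements IDNA 2003 rather than IDNA 2008, but produces the
        # same punycode for simple names.
        return host.encode("idna")

print(encode_host("example.com"))
print(encode_host("bücher.de"))  # -> b'xn--bcher-kva.de'
```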
def getnameinfo(sockaddr: IPSockAddrType, flags: int = 0) -> Awaitable[tuple[str, str]]:
    """
    Look up the host name of an IP address.

    :param sockaddr: socket address (e.g. (ipaddress, port) for IPv4)
    :param flags: flags to pass to upstream ``getnameinfo()``
    :return: a tuple of (host name, service name)

    .. seealso:: :func:`socket.getnameinfo`

    """
    return get_asynclib().getnameinfo(sockaddr, flags)


def wait_socket_readable(sock: socket.socket) -> Awaitable[None]:
    """
    Wait until the given socket has data to be read.

    This does **NOT** work on Windows when using the asyncio backend with a proactor
    event loop (default on py3.8+).

    .. warning:: Only use this on raw sockets that have not been wrapped by any higher
        level constructs like socket streams!

    :param sock: a socket object
    :raises ~anyio.ClosedResourceError: if the socket was closed while waiting for the
        socket to become readable
    :raises ~anyio.BusyResourceError: if another task is already waiting for the socket
        to become readable

    """
    return get_asynclib().wait_socket_readable(sock)


def wait_socket_writable(sock: socket.socket) -> Awaitable[None]:
    """
    Wait until the given socket can be written to.

    This does **NOT** work on Windows when using the asyncio backend with a proactor
    event loop (default on py3.8+).

    .. warning:: Only use this on raw sockets that have not been wrapped by any higher
        level constructs like socket streams!

    :param sock: a socket object
    :raises ~anyio.ClosedResourceError: if the socket was closed while waiting for the
        socket to become writable
    :raises ~anyio.BusyResourceError: if another task is already waiting for the socket
        to become writable

    """
    return get_asynclib().wait_socket_writable(sock)


#
# Private API
#


def convert_ipv6_sockaddr(
    sockaddr: tuple[str, int, int, int] | tuple[str, int]
) -> tuple[str, int]:
    """
    Convert a 4-tuple IPv6 socket address to a 2-tuple (address, port) format.

    If the scope ID is nonzero, it is added to the address, separated with ``%``.
    Otherwise the flow id and scope id are simply cut off from the tuple.
    Any other kinds of socket addresses are returned as-is.

    :param sockaddr: the result of :meth:`~socket.socket.getsockname`
    :return: the converted socket address

    """
    # This is more complicated than it should be because of MyPy
    if isinstance(sockaddr, tuple) and len(sockaddr) == 4:
        host, port, flowinfo, scope_id = cast(Tuple[str, int, int, int], sockaddr)
        if scope_id:
            # PyPy (as of v7.3.11) leaves the interface name in the result, so
            # we discard it and only get the scope ID from the end
            # (https://foss.heptapod.net/pypy/pypy/-/issues/3938)
            host = host.split("%")[0]

            # Add scope_id to the address
            return f"{host}%{scope_id}", port
        else:
            return host, port
    else:
        return cast(Tuple[str, int], sockaddr)
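Since this is a pure function, its three branches are easy to exercise in isolation. The standalone `convert` below reimplements the same logic without the `cast()` calls, purely for illustration:

```python
def convert(sockaddr):
    # 4-tuple IPv6 address: reduce to (host, port), folding a nonzero
    # scope ID into the host part; everything else passes through.
    if isinstance(sockaddr, tuple) and len(sockaddr) == 4:
        host, port, flowinfo, scope_id = sockaddr
        if scope_id:
            host = host.split("%")[0]  # strip any interface name (PyPy quirk)
            return f"{host}%{scope_id}", port
        return host, port
    return sockaddr

print(convert(("fe80::1", 8080, 0, 3)))     # scope ID kept in the host
print(convert(("2001:db8::1", 443, 0, 0)))  # flow/scope info dropped
print(convert(("127.0.0.1", 80)))           # IPv4 2-tuple passes through
```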
@@ -1,47 +0,0 @@
from __future__ import annotations

import math
from typing import Any, TypeVar, overload

from ..streams.memory import (
    MemoryObjectReceiveStream,
    MemoryObjectSendStream,
    MemoryObjectStreamState,
)

T_Item = TypeVar("T_Item")


@overload
def create_memory_object_stream(
    max_buffer_size: float = ...,
) -> tuple[MemoryObjectSendStream[Any], MemoryObjectReceiveStream[Any]]:
    ...


@overload
def create_memory_object_stream(
    max_buffer_size: float = ..., item_type: type[T_Item] = ...
) -> tuple[MemoryObjectSendStream[T_Item], MemoryObjectReceiveStream[T_Item]]:
    ...


def create_memory_object_stream(
    max_buffer_size: float = 0, item_type: type[T_Item] | None = None
) -> tuple[MemoryObjectSendStream[Any], MemoryObjectReceiveStream[Any]]:
    """
    Create a memory object stream.

    :param max_buffer_size: number of items held in the buffer until ``send()`` starts
        blocking
    :param item_type: type of item, for marking the streams with the right generic type
        for static typing (not used at run time)
    :return: a tuple of (send stream, receive stream)

    """
    if max_buffer_size != math.inf and not isinstance(max_buffer_size, int):
        raise ValueError("max_buffer_size must be either an integer or math.inf")
    if max_buffer_size < 0:
        raise ValueError("max_buffer_size cannot be negative")

    state: MemoryObjectStreamState = MemoryObjectStreamState(max_buffer_size)
    return MemoryObjectSendStream(state), MemoryObjectReceiveStream(state)
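The two returned streams share one `MemoryObjectStreamState`, which is what gives the pair its bounded-buffer semantics. A synchronous sketch of that shared state (with `WouldBlock` standing in for where the async versions would suspend; `BufferState` is an illustrative name, not anyio's class):

```python
import math
from collections import deque

class WouldBlock(Exception):
    pass

class BufferState:
    # Synchronous stand-in for the shared stream state: a deque plus the
    # configured capacity, used by both the send and receive sides.
    def __init__(self, max_buffer_size: float) -> None:
        if max_buffer_size != math.inf and not isinstance(max_buffer_size, int):
            raise ValueError("max_buffer_size must be either an integer or math.inf")
        if max_buffer_size < 0:
            raise ValueError("max_buffer_size cannot be negative")
        self.max_buffer_size = max_buffer_size
        self.buffer: deque = deque()

    def send_nowait(self, item) -> None:
        if len(self.buffer) >= self.max_buffer_size:
            raise WouldBlock  # a real send() would suspend here instead
        self.buffer.append(item)

    def receive_nowait(self):
        if not self.buffer:
            raise WouldBlock
        return self.buffer.popleft()

state = BufferState(max_buffer_size=1)
state.send_nowait("a")
overflowed = False
try:
    state.send_nowait("b")  # buffer full: a second send would block
except WouldBlock:
    overflowed = True
received = state.receive_nowait()
print(overflowed, received)
```

With the default `max_buffer_size=0`, every send rendezvouses directly with a waiting receiver, which is why `math.inf` and nonnegative integers are the only accepted capacities.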
@@ -1,135 +0,0 @@
from __future__ import annotations

from io import BytesIO
from os import PathLike
from subprocess import DEVNULL, PIPE, CalledProcessError, CompletedProcess
from typing import (
    IO,
    Any,
    AsyncIterable,
    Mapping,
    Sequence,
    cast,
)

from ..abc import Process
from ._eventloop import get_asynclib
from ._tasks import create_task_group


async def run_process(
    command: str | bytes | Sequence[str | bytes],
    *,
    input: bytes | None = None,
    stdout: int | IO[Any] | None = PIPE,
    stderr: int | IO[Any] | None = PIPE,
    check: bool = True,
    cwd: str | bytes | PathLike[str] | None = None,
    env: Mapping[str, str] | None = None,
    start_new_session: bool = False,
) -> CompletedProcess[bytes]:
    """
    Run an external command in a subprocess and wait until it completes.

    .. seealso:: :func:`subprocess.run`

    :param command: either a string to pass to the shell, or an iterable of strings
        containing the executable name or path and its arguments
    :param input: bytes passed to the standard input of the subprocess
    :param stdout: either :data:`subprocess.PIPE` or :data:`subprocess.DEVNULL`
    :param stderr: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL` or
        :data:`subprocess.STDOUT`
    :param check: if ``True``, raise :exc:`~subprocess.CalledProcessError` if the
        process terminates with a return code other than 0
    :param cwd: if not ``None``, change the working directory to this before running
        the command
    :param env: if not ``None``, this mapping replaces the inherited environment
        variables from the parent process
    :param start_new_session: if ``True``, the setsid() system call will be made in the
        child process prior to the execution of the subprocess (POSIX only)
    :return: an object representing the completed process
    :raises ~subprocess.CalledProcessError: if ``check`` is ``True`` and the process
        exits with a nonzero return code

    """

    async def drain_stream(stream: AsyncIterable[bytes], index: int) -> None:
        buffer = BytesIO()
        async for chunk in stream:
            buffer.write(chunk)

        stream_contents[index] = buffer.getvalue()

    async with await open_process(
        command,
        stdin=PIPE if input else DEVNULL,
        stdout=stdout,
        stderr=stderr,
        cwd=cwd,
        env=env,
        start_new_session=start_new_session,
    ) as process:
        stream_contents: list[bytes | None] = [None, None]
        try:
            async with create_task_group() as tg:
                if process.stdout:
                    tg.start_soon(drain_stream, process.stdout, 0)
                if process.stderr:
                    tg.start_soon(drain_stream, process.stderr, 1)
                if process.stdin and input:
                    await process.stdin.send(input)
                    await process.stdin.aclose()

                await process.wait()
        except BaseException:
            process.kill()
            raise

    output, errors = stream_contents
    if check and process.returncode != 0:
        raise CalledProcessError(cast(int, process.returncode), command, output, errors)

    return CompletedProcess(command, cast(int, process.returncode), output, errors)


async def open_process(
    command: str | bytes | Sequence[str | bytes],
    *,
    stdin: int | IO[Any] | None = PIPE,
    stdout: int | IO[Any] | None = PIPE,
    stderr: int | IO[Any] | None = PIPE,
    cwd: str | bytes | PathLike[str] | None = None,
    env: Mapping[str, str] | None = None,
    start_new_session: bool = False,
) -> Process:
    """
    Start an external command in a subprocess.

    .. seealso:: :class:`subprocess.Popen`

    :param command: either a string to pass to the shell, or an iterable of strings
        containing the executable name or path and its arguments
    :param stdin: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL`, a
        file-like object, or ``None``
    :param stdout: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL`,
        a file-like object, or ``None``
    :param stderr: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL`,
        :data:`subprocess.STDOUT`, a file-like object, or ``None``
    :param cwd: if not ``None``, the working directory is changed before executing
    :param env: if not ``None``, it must be a mapping that defines the environment
        variables for the new process
    :param start_new_session: if ``True``, the setsid() system call will be made in the
        child process prior to the execution of the subprocess (POSIX only)
    :return: an asynchronous process object

    """
    shell = isinstance(command, str)
    return await get_asynclib().open_process(
        command,
        shell=shell,
        stdin=stdin,
        stdout=stdout,
        stderr=stderr,
        cwd=cwd,
        env=env,
        start_new_session=start_new_session,
    )
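As the ``seealso`` notes, `run_process()` is modeled on :func:`subprocess.run`; the same call expressed synchronously with the standard library (using `sys.executable` as a portable command to run):

```python
import subprocess
import sys

# Synchronous equivalent of:
#   await run_process([sys.executable, "-c", "print('hello')"])
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    check=True,  # raises CalledProcessError on a nonzero return code
)
print(result.returncode)
print(result.stdout.decode().strip())
```

The async version adds one thing over this: stdout and stderr are drained in concurrently running tasks, so neither pipe can fill up and deadlock the child while the parent waits.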
@@ -1,596 +0,0 @@
from __future__ import annotations
|
|
||||||
|
|
||||||
from collections import deque
|
|
||||||
from dataclasses import dataclass
|
|
||||||
from types import TracebackType
|
|
||||||
from warnings import warn
|
|
||||||
|
|
||||||
from ..lowlevel import cancel_shielded_checkpoint, checkpoint, checkpoint_if_cancelled
|
|
||||||
from ._compat import DeprecatedAwaitable
|
|
||||||
from ._eventloop import get_asynclib
|
|
||||||
from ._exceptions import BusyResourceError, WouldBlock
|
|
||||||
from ._tasks import CancelScope
|
|
||||||
from ._testing import TaskInfo, get_current_task
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass(frozen=True)
|
|
||||||
class EventStatistics:
|
|
||||||
"""
|
|
||||||
:ivar int tasks_waiting: number of tasks waiting on :meth:`~.Event.wait`
|
|
||||||
"""
|
|
||||||
|
|
||||||
tasks_waiting: int
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass(frozen=True)
|
|
||||||
class CapacityLimiterStatistics:
|
|
||||||
"""
|
|
||||||
:ivar int borrowed_tokens: number of tokens currently borrowed by tasks
|
|
||||||
:ivar float total_tokens: total number of available tokens
|
|
||||||
:ivar tuple borrowers: tasks or other objects currently holding tokens borrowed from this
|
|
||||||
limiter
|
|
||||||
:ivar int tasks_waiting: number of tasks waiting on :meth:`~.CapacityLimiter.acquire` or
|
|
||||||
:meth:`~.CapacityLimiter.acquire_on_behalf_of`
|
|
||||||
"""
|
|
||||||
|
|
||||||
borrowed_tokens: int
|
|
||||||
total_tokens: float
|
|
||||||
borrowers: tuple[object, ...]
|
|
||||||
tasks_waiting: int
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass(frozen=True)
|
|
||||||
class LockStatistics:
|
|
||||||
"""
|
|
||||||
:ivar bool locked: flag indicating if this lock is locked or not
|
|
||||||
:ivar ~anyio.TaskInfo owner: task currently holding the lock (or ``None`` if the lock is not
|
|
||||||
held by any task)
|
|
||||||
:ivar int tasks_waiting: number of tasks waiting on :meth:`~.Lock.acquire`
|
|
||||||
"""
|
|
||||||
|
|
||||||
locked: bool
|
|
||||||
owner: TaskInfo | None
|
|
||||||
tasks_waiting: int
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass(frozen=True)
|
|
||||||
class ConditionStatistics:
|
|
||||||
"""
|
|
||||||
:ivar int tasks_waiting: number of tasks blocked on :meth:`~.Condition.wait`
|
|
||||||
:ivar ~anyio.LockStatistics lock_statistics: statistics of the underlying :class:`~.Lock`
|
|
||||||
"""
|
|
||||||
|
|
||||||
tasks_waiting: int
|
|
||||||
lock_statistics: LockStatistics
|
|
||||||
|
|
||||||
|
|
||||||
@dataclass(frozen=True)
|
|
||||||
class SemaphoreStatistics:
|
|
||||||
"""
|
|
||||||
:ivar int tasks_waiting: number of tasks waiting on :meth:`~.Semaphore.acquire`
|
|
||||||
|
|
||||||
"""
|
|
||||||
|
|
||||||
tasks_waiting: int
|
|
||||||
|
|
||||||
|
|
||||||
class Event:
|
|
||||||
def __new__(cls) -> Event:
|
|
||||||
return get_asynclib().Event()
|
|
||||||
|
|
||||||
def set(self) -> DeprecatedAwaitable:
|
|
||||||
"""Set the flag, notifying all listeners."""
|
|
||||||
raise NotImplementedError
|
|
||||||
|
|
||||||
def is_set(self) -> bool:
|
|
||||||
"""Return ``True`` if the flag is set, ``False`` if not."""
|
|
||||||
raise NotImplementedError
|
|
||||||
|
|
||||||
async def wait(self) -> None:
|
|
||||||
"""
|
|
||||||
Wait until the flag has been set.
|
|
||||||
|
|
||||||
If the flag has already been set when this method is called, it returns immediately.
|
|
||||||
|
|
||||||
"""
|
|
||||||
raise NotImplementedError
|
|
||||||
|
|
||||||
def statistics(self) -> EventStatistics:
|
|
||||||
"""Return statistics about the current state of this event."""
|
|
||||||
raise NotImplementedError


class Lock:
    _owner_task: TaskInfo | None = None

    def __init__(self) -> None:
        self._waiters: deque[tuple[TaskInfo, Event]] = deque()

    async def __aenter__(self) -> None:
        await self.acquire()

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> None:
        self.release()

    async def acquire(self) -> None:
        """Acquire the lock."""
        await checkpoint_if_cancelled()
        try:
            self.acquire_nowait()
        except WouldBlock:
            task = get_current_task()
            event = Event()
            token = task, event
            self._waiters.append(token)
            try:
                await event.wait()
            except BaseException:
                if not event.is_set():
                    self._waiters.remove(token)
                elif self._owner_task == task:
                    self.release()

                raise

            assert self._owner_task == task
        else:
            try:
                await cancel_shielded_checkpoint()
            except BaseException:
                self.release()
                raise

    def acquire_nowait(self) -> None:
        """
        Acquire the lock, without blocking.

        :raises ~anyio.WouldBlock: if the operation would block

        """
        task = get_current_task()
        if self._owner_task == task:
            raise RuntimeError("Attempted to acquire an already held Lock")

        if self._owner_task is not None:
            raise WouldBlock

        self._owner_task = task

    def release(self) -> DeprecatedAwaitable:
        """Release the lock."""
        if self._owner_task != get_current_task():
            raise RuntimeError("The current task is not holding this lock")

        if self._waiters:
            self._owner_task, event = self._waiters.popleft()
            event.set()
        else:
            del self._owner_task

        return DeprecatedAwaitable(self.release)

    def locked(self) -> bool:
        """Return True if the lock is currently held."""
        return self._owner_task is not None

    def statistics(self) -> LockStatistics:
        """
        Return statistics about the current state of this lock.

        .. versionadded:: 3.0
        """
        return LockStatistics(self.locked(), self._owner_task, len(self._waiters))
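Lock above parks each blocked task on a private Event and hands ownership directly to the next waiter in `release()`. The stdlib `asyncio.Lock` offers the same async-context-manager surface; a runnable sketch of the mutual-exclusion guarantee (asyncio is used so the snippet is self-contained):

```python
import asyncio

async def main() -> int:
    lock = asyncio.Lock()  # stdlib analogue of the Lock above
    counter = 0

    async def add_one() -> None:
        nonlocal counter
        async with lock:            # __aenter__ -> acquire(), __aexit__ -> release()
            current = counter
            await asyncio.sleep(0)  # a checkpoint inside the critical section
            counter = current + 1

    await asyncio.gather(*(add_one() for _ in range(10)))
    return counter

counter_result = asyncio.run(main())
```

Without the lock, every task would read `counter` as 0 before any write landed; with it, the read-modify-write runs atomically per task.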


class Condition:
    _owner_task: TaskInfo | None = None

    def __init__(self, lock: Lock | None = None):
        self._lock = lock or Lock()
        self._waiters: deque[Event] = deque()

    async def __aenter__(self) -> None:
        await self.acquire()

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> None:
        self.release()

    def _check_acquired(self) -> None:
        if self._owner_task != get_current_task():
            raise RuntimeError("The current task is not holding the underlying lock")

    async def acquire(self) -> None:
        """Acquire the underlying lock."""
        await self._lock.acquire()
        self._owner_task = get_current_task()

    def acquire_nowait(self) -> None:
        """
        Acquire the underlying lock, without blocking.

        :raises ~anyio.WouldBlock: if the operation would block

        """
        self._lock.acquire_nowait()
        self._owner_task = get_current_task()

    def release(self) -> DeprecatedAwaitable:
        """Release the underlying lock."""
        self._lock.release()
        return DeprecatedAwaitable(self.release)

    def locked(self) -> bool:
        """Return True if the lock is set."""
        return self._lock.locked()

    def notify(self, n: int = 1) -> None:
        """Notify exactly n listeners."""
        self._check_acquired()
        for _ in range(n):
            try:
                event = self._waiters.popleft()
            except IndexError:
                break

            event.set()

    def notify_all(self) -> None:
        """Notify all the listeners."""
        self._check_acquired()
        for event in self._waiters:
            event.set()

        self._waiters.clear()

    async def wait(self) -> None:
        """Wait for a notification."""
        await checkpoint()
        event = Event()
        self._waiters.append(event)
        self.release()
        try:
            await event.wait()
        except BaseException:
            if not event.is_set():
                self._waiters.remove(event)

            raise
        finally:
            with CancelScope(shield=True):
                await self.acquire()

    def statistics(self) -> ConditionStatistics:
        """
        Return statistics about the current state of this condition.

        .. versionadded:: 3.0
        """
        return ConditionStatistics(len(self._waiters), self._lock.statistics())
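The key contract in `wait()` above: the underlying lock is released while sleeping and reacquired (shielded from cancellation) before returning. The stdlib `asyncio.Condition` follows the same protocol; a runnable producer/consumer sketch of it (asyncio-based so it stands alone):

```python
import asyncio

async def main() -> list[int]:
    cond = asyncio.Condition()  # stdlib analogue of the Condition above
    items: list[int] = []
    got: list[int] = []

    async def consumer() -> None:
        async with cond:              # wait() must be called with the lock held
            while not items:
                await cond.wait()     # releases the lock while sleeping
            got.append(items.pop())

    async def producer() -> None:
        async with cond:
            items.append(42)
            cond.notify()             # wake one waiter

    await asyncio.gather(consumer(), producer())
    return got

got_items = asyncio.run(main())
```

The `while not items` loop guards against spurious or stale wakeups, which is why `wait()` is always paired with a predicate re-check.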


class Semaphore:
    def __init__(self, initial_value: int, *, max_value: int | None = None):
        if not isinstance(initial_value, int):
            raise TypeError("initial_value must be an integer")
        if initial_value < 0:
            raise ValueError("initial_value must be >= 0")
        if max_value is not None:
            if not isinstance(max_value, int):
                raise TypeError("max_value must be an integer or None")
            if max_value < initial_value:
                raise ValueError(
                    "max_value must be equal to or higher than initial_value"
                )

        self._value = initial_value
        self._max_value = max_value
        self._waiters: deque[Event] = deque()

    async def __aenter__(self) -> Semaphore:
        await self.acquire()
        return self

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> None:
        self.release()

    async def acquire(self) -> None:
        """Decrement the semaphore value, blocking if necessary."""
        await checkpoint_if_cancelled()
        try:
            self.acquire_nowait()
        except WouldBlock:
            event = Event()
            self._waiters.append(event)
            try:
                await event.wait()
            except BaseException:
                if not event.is_set():
                    self._waiters.remove(event)
                else:
                    self.release()

                raise
        else:
            try:
                await cancel_shielded_checkpoint()
            except BaseException:
                self.release()
                raise

    def acquire_nowait(self) -> None:
        """
        Acquire the underlying lock, without blocking.

        :raises ~anyio.WouldBlock: if the operation would block

        """
        if self._value == 0:
            raise WouldBlock

        self._value -= 1

    def release(self) -> DeprecatedAwaitable:
        """Increment the semaphore value."""
        if self._max_value is not None and self._value == self._max_value:
            raise ValueError("semaphore released too many times")

        if self._waiters:
            self._waiters.popleft().set()
        else:
            self._value += 1

        return DeprecatedAwaitable(self.release)

    @property
    def value(self) -> int:
        """The current value of the semaphore."""
        return self._value

    @property
    def max_value(self) -> int | None:
        """The maximum value of the semaphore."""
        return self._max_value

    def statistics(self) -> SemaphoreStatistics:
        """
        Return statistics about the current state of this semaphore.

        .. versionadded:: 3.0
        """
        return SemaphoreStatistics(len(self._waiters))
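A semaphore's typical use is bounding concurrency: at most `initial_value` tasks hold it at once. A runnable sketch using the stdlib `asyncio.Semaphore` (same acquire/release surface, chosen so the snippet needs no third-party install):

```python
import asyncio

async def main() -> int:
    sem = asyncio.Semaphore(2)  # at most two concurrent holders
    active = peak = 0

    async def worker() -> None:
        nonlocal active, peak
        async with sem:              # acquire() decrements, release() increments
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0.01)
            active -= 1

    await asyncio.gather(*(worker() for _ in range(6)))
    return peak

peak_concurrency = asyncio.run(main())
```

Passing `max_value` to the Semaphore above additionally makes over-release a `ValueError`, which asyncio's `BoundedSemaphore` mirrors.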


class CapacityLimiter:
    def __new__(cls, total_tokens: float) -> CapacityLimiter:
        return get_asynclib().CapacityLimiter(total_tokens)

    async def __aenter__(self) -> None:
        raise NotImplementedError

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        raise NotImplementedError

    @property
    def total_tokens(self) -> float:
        """
        The total number of tokens available for borrowing.

        This is a read-write property. If the total number of tokens is increased, the
        proportionate number of tasks waiting on this limiter will be granted their tokens.

        .. versionchanged:: 3.0
            The property is now writable.

        """
        raise NotImplementedError

    @total_tokens.setter
    def total_tokens(self, value: float) -> None:
        raise NotImplementedError

    async def set_total_tokens(self, value: float) -> None:
        warn(
            "CapacityLimiter.set_total_tokens has been deprecated. Set the value of the"
            ' "total_tokens" attribute directly.',
            DeprecationWarning,
        )
        self.total_tokens = value

    @property
    def borrowed_tokens(self) -> int:
        """The number of tokens that have currently been borrowed."""
        raise NotImplementedError

    @property
    def available_tokens(self) -> float:
        """The number of tokens currently available to be borrowed"""
        raise NotImplementedError

    def acquire_nowait(self) -> DeprecatedAwaitable:
        """
        Acquire a token for the current task without waiting for one to become available.

        :raises ~anyio.WouldBlock: if there are no tokens available for borrowing

        """
        raise NotImplementedError

    def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable:
        """
        Acquire a token without waiting for one to become available.

        :param borrower: the entity borrowing a token
        :raises ~anyio.WouldBlock: if there are no tokens available for borrowing

        """
        raise NotImplementedError

    async def acquire(self) -> None:
        """
        Acquire a token for the current task, waiting if necessary for one to become available.

        """
        raise NotImplementedError

    async def acquire_on_behalf_of(self, borrower: object) -> None:
        """
        Acquire a token, waiting if necessary for one to become available.

        :param borrower: the entity borrowing a token

        """
        raise NotImplementedError

    def release(self) -> None:
        """
        Release the token held by the current task.

        :raises RuntimeError: if the current task has not borrowed a token from this limiter.

        """
        raise NotImplementedError

    def release_on_behalf_of(self, borrower: object) -> None:
        """
        Release the token held by the given borrower.

        :raises RuntimeError: if the borrower has not borrowed a token from this limiter.

        """
        raise NotImplementedError

    def statistics(self) -> CapacityLimiterStatistics:
        """
        Return statistics about the current state of this limiter.

        .. versionadded:: 3.0

        """
        raise NotImplementedError
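Unlike a semaphore, the CapacityLimiter interface above tracks *who* borrowed each token, so double-acquire and foreign-release become errors. A simplified, synchronous sketch of that bookkeeping (an illustration of the contract, not anyio's actual backend implementation, which is async):

```python
# Minimal borrower-tracking sketch of the CapacityLimiter contract.
class MiniCapacityLimiter:
    def __init__(self, total_tokens: float) -> None:
        self.total_tokens = total_tokens
        self._borrowers: set[object] = set()

    @property
    def borrowed_tokens(self) -> int:
        return len(self._borrowers)

    @property
    def available_tokens(self) -> float:
        return self.total_tokens - len(self._borrowers)

    def acquire_on_behalf_of_nowait(self, borrower: object) -> None:
        if self.available_tokens <= 0:
            raise RuntimeError("would block: no tokens available")
        self._borrowers.add(borrower)  # remember who holds this token

    def release_on_behalf_of(self, borrower: object) -> None:
        if borrower not in self._borrowers:
            raise RuntimeError("borrower has not borrowed a token")
        self._borrowers.discard(borrower)

limiter = MiniCapacityLimiter(2)
limiter.acquire_on_behalf_of_nowait("task-a")
limiter.acquire_on_behalf_of_nowait("task-b")
remaining = limiter.available_tokens   # both tokens now borrowed
limiter.release_on_behalf_of("task-a")
```

Because `total_tokens` may be `math.inf`, the available count is a float, matching the signatures above.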


def create_lock() -> Lock:
    """
    Create an asynchronous lock.

    :return: a lock object

    .. deprecated:: 3.0
        Use :class:`~Lock` directly.

    """
    warn("create_lock() is deprecated -- use Lock() directly", DeprecationWarning)
    return Lock()


def create_condition(lock: Lock | None = None) -> Condition:
    """
    Create an asynchronous condition.

    :param lock: the lock to base the condition object on
    :return: a condition object

    .. deprecated:: 3.0
        Use :class:`~Condition` directly.

    """
    warn(
        "create_condition() is deprecated -- use Condition() directly",
        DeprecationWarning,
    )
    return Condition(lock=lock)


def create_event() -> Event:
    """
    Create an asynchronous event object.

    :return: an event object

    .. deprecated:: 3.0
        Use :class:`~Event` directly.

    """
    warn("create_event() is deprecated -- use Event() directly", DeprecationWarning)
    return get_asynclib().Event()


def create_semaphore(value: int, *, max_value: int | None = None) -> Semaphore:
    """
    Create an asynchronous semaphore.

    :param value: the semaphore's initial value
    :param max_value: if set, makes this a "bounded" semaphore that raises :exc:`ValueError` if the
        semaphore's value would exceed this number
    :return: a semaphore object

    .. deprecated:: 3.0
        Use :class:`~Semaphore` directly.

    """
    warn(
        "create_semaphore() is deprecated -- use Semaphore() directly",
        DeprecationWarning,
    )
    return Semaphore(value, max_value=max_value)


def create_capacity_limiter(total_tokens: float) -> CapacityLimiter:
    """
    Create a capacity limiter.

    :param total_tokens: the total number of tokens available for borrowing (can be an integer or
        :data:`math.inf`)
    :return: a capacity limiter object

    .. deprecated:: 3.0
        Use :class:`~CapacityLimiter` directly.

    """
    warn(
        "create_capacity_limiter() is deprecated -- use CapacityLimiter() directly",
        DeprecationWarning,
    )
    return get_asynclib().CapacityLimiter(total_tokens)


class ResourceGuard:
    __slots__ = "action", "_guarded"

    def __init__(self, action: str):
        self.action = action
        self._guarded = False

    def __enter__(self) -> None:
        if self._guarded:
            raise BusyResourceError(self.action)

        self._guarded = True

    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        self._guarded = False
        return None
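ResourceGuard is a plain (non-async) context manager that turns concurrent use of a single-consumer resource into an immediate error instead of silent corruption. A self-contained sketch of the same pattern (the guard is re-declared inline and the exception swapped for `RuntimeError` so the snippet runs standalone):

```python
# Inline copy of the ResourceGuard pattern, using RuntimeError in place of
# anyio's BusyResourceError so no imports beyond the stdlib are needed.
class Guard:
    def __init__(self, action: str) -> None:
        self.action = action
        self._guarded = False

    def __enter__(self) -> None:
        if self._guarded:
            raise RuntimeError(f"Another task is already {self.action}")
        self._guarded = True

    def __exit__(self, *exc_info: object) -> None:
        self._guarded = False

guard = Guard("receiving from this stream")
rejected = False
with guard:
    try:
        with guard:   # second entry while already guarded -> error
            pass
    except RuntimeError:
        rejected = True
```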
@@ -1,180 +0,0 @@
from __future__ import annotations

import math
from types import TracebackType
from warnings import warn

from ..abc._tasks import TaskGroup, TaskStatus
from ._compat import (
    DeprecatedAsyncContextManager,
    DeprecatedAwaitable,
    DeprecatedAwaitableFloat,
)
from ._eventloop import get_asynclib


class _IgnoredTaskStatus(TaskStatus[object]):
    def started(self, value: object = None) -> None:
        pass


TASK_STATUS_IGNORED = _IgnoredTaskStatus()


class CancelScope(DeprecatedAsyncContextManager["CancelScope"]):
    """
    Wraps a unit of work that can be made separately cancellable.

    :param deadline: The time (clock value) when this scope is cancelled automatically
    :param shield: ``True`` to shield the cancel scope from external cancellation
    """

    def __new__(
        cls, *, deadline: float = math.inf, shield: bool = False
    ) -> CancelScope:
        return get_asynclib().CancelScope(shield=shield, deadline=deadline)

    def cancel(self) -> DeprecatedAwaitable:
        """Cancel this scope immediately."""
        raise NotImplementedError

    @property
    def deadline(self) -> float:
        """
        The time (clock value) when this scope is cancelled automatically.

        Will be ``float('inf')`` if no timeout has been set.

        """
        raise NotImplementedError

    @deadline.setter
    def deadline(self, value: float) -> None:
        raise NotImplementedError

    @property
    def cancel_called(self) -> bool:
        """``True`` if :meth:`cancel` has been called."""
        raise NotImplementedError

    @property
    def shield(self) -> bool:
        """
        ``True`` if this scope is shielded from external cancellation.

        While a scope is shielded, it will not receive cancellations from outside.

        """
        raise NotImplementedError

    @shield.setter
    def shield(self, value: bool) -> None:
        raise NotImplementedError

    def __enter__(self) -> CancelScope:
        raise NotImplementedError

    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        raise NotImplementedError


def open_cancel_scope(*, shield: bool = False) -> CancelScope:
    """
    Open a cancel scope.

    :param shield: ``True`` to shield the cancel scope from external cancellation
    :return: a cancel scope

    .. deprecated:: 3.0
        Use :class:`~CancelScope` directly.

    """
    warn(
        "open_cancel_scope() is deprecated -- use CancelScope() directly",
        DeprecationWarning,
    )
    return get_asynclib().CancelScope(shield=shield)


class FailAfterContextManager(DeprecatedAsyncContextManager[CancelScope]):
    def __init__(self, cancel_scope: CancelScope):
        self._cancel_scope = cancel_scope

    def __enter__(self) -> CancelScope:
        return self._cancel_scope.__enter__()

    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        retval = self._cancel_scope.__exit__(exc_type, exc_val, exc_tb)
        if self._cancel_scope.cancel_called:
            raise TimeoutError

        return retval


def fail_after(delay: float | None, shield: bool = False) -> FailAfterContextManager:
    """
    Create a context manager which raises a :class:`TimeoutError` if it does not finish in time.

    :param delay: maximum allowed time (in seconds) before raising the exception, or ``None`` to
        disable the timeout
    :param shield: ``True`` to shield the cancel scope from external cancellation
    :return: a context manager that yields a cancel scope
    :rtype: :class:`~typing.ContextManager`\\[:class:`~anyio.CancelScope`\\]

    """
    deadline = (
        (get_asynclib().current_time() + delay) if delay is not None else math.inf
    )
    cancel_scope = get_asynclib().CancelScope(deadline=deadline, shield=shield)
    return FailAfterContextManager(cancel_scope)
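`fail_after` converts a deadline-driven cancellation into a `TimeoutError` on exit; the stdlib's `asyncio.wait_for` provides the equivalent cancel-then-raise behaviour. A runnable sketch of the timeout semantics (asyncio-based so it stands alone):

```python
import asyncio

async def main() -> bool:
    try:
        # Equivalent in spirit to `with fail_after(0.01): ...` -- the slow
        # operation is cancelled at the deadline and TimeoutError is raised.
        await asyncio.wait_for(asyncio.sleep(1), timeout=0.01)
    except asyncio.TimeoutError:
        return True
    return False

timed_out = asyncio.run(main())
```

`move_on_after` below is the non-raising variant: the scope simply exits at the deadline instead of raising.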


def move_on_after(delay: float | None, shield: bool = False) -> CancelScope:
    """
    Create a cancel scope with a deadline that expires after the given delay.

    :param delay: maximum allowed time (in seconds) before exiting the context block, or ``None``
        to disable the timeout
    :param shield: ``True`` to shield the cancel scope from external cancellation
    :return: a cancel scope

    """
    deadline = (
        (get_asynclib().current_time() + delay) if delay is not None else math.inf
    )
    return get_asynclib().CancelScope(deadline=deadline, shield=shield)


def current_effective_deadline() -> DeprecatedAwaitableFloat:
    """
    Return the nearest deadline among all the cancel scopes effective for the current task.

    :return: a clock value from the event loop's internal clock (or ``float('inf')`` if
        there is no deadline in effect, or ``float('-inf')`` if the current scope has
        been cancelled)
    :rtype: float

    """
    return DeprecatedAwaitableFloat(
        get_asynclib().current_effective_deadline(), current_effective_deadline
    )


def create_task_group() -> TaskGroup:
    """
    Create a task group.

    :return: a task group

    """
    return get_asynclib().TaskGroup()
@@ -1,82 +0,0 @@
from __future__ import annotations

from typing import Any, Awaitable, Generator

from ._compat import DeprecatedAwaitableList, _warn_deprecation
from ._eventloop import get_asynclib


class TaskInfo:
    """
    Represents an asynchronous task.

    :ivar int id: the unique identifier of the task
    :ivar parent_id: the identifier of the parent task, if any
    :vartype parent_id: Optional[int]
    :ivar str name: the description of the task (if any)
    :ivar ~collections.abc.Coroutine coro: the coroutine object of the task
    """

    __slots__ = "_name", "id", "parent_id", "name", "coro"

    def __init__(
        self,
        id: int,
        parent_id: int | None,
        name: str | None,
        coro: Generator[Any, Any, Any] | Awaitable[Any],
    ):
        func = get_current_task
        self._name = f"{func.__module__}.{func.__qualname__}"
        self.id: int = id
        self.parent_id: int | None = parent_id
        self.name: str | None = name
        self.coro: Generator[Any, Any, Any] | Awaitable[Any] = coro

    def __eq__(self, other: object) -> bool:
        if isinstance(other, TaskInfo):
            return self.id == other.id

        return NotImplemented

    def __hash__(self) -> int:
        return hash(self.id)

    def __repr__(self) -> str:
        return f"{self.__class__.__name__}(id={self.id!r}, name={self.name!r})"

    def __await__(self) -> Generator[None, None, TaskInfo]:
        _warn_deprecation(self)
        if False:
            yield

        return self

    def _unwrap(self) -> TaskInfo:
        return self


def get_current_task() -> TaskInfo:
    """
    Return the current task.

    :return: a representation of the current task

    """
    return get_asynclib().get_current_task()


def get_running_tasks() -> DeprecatedAwaitableList[TaskInfo]:
    """
    Return a list of running tasks in the current event loop.

    :return: a list of task info objects

    """
    tasks = get_asynclib().get_running_tasks()
    return DeprecatedAwaitableList(tasks, func=get_running_tasks)


async def wait_all_tasks_blocked() -> None:
    """Wait until all other tasks are waiting for something."""
    await get_asynclib().wait_all_tasks_blocked()
@@ -1,83 +0,0 @@
from __future__ import annotations

import sys
from typing import Any, Callable, Mapping, TypeVar, overload

from ._exceptions import TypedAttributeLookupError

if sys.version_info >= (3, 8):
    from typing import final
else:
    from typing_extensions import final

T_Attr = TypeVar("T_Attr")
T_Default = TypeVar("T_Default")
undefined = object()


def typed_attribute() -> Any:
    """Return a unique object, used to mark typed attributes."""
    return object()


class TypedAttributeSet:
    """
    Superclass for typed attribute collections.

    Checks that every public attribute of every subclass has a type annotation.
    """

    def __init_subclass__(cls) -> None:
        annotations: dict[str, Any] = getattr(cls, "__annotations__", {})
        for attrname in dir(cls):
            if not attrname.startswith("_") and attrname not in annotations:
                raise TypeError(
                    f"Attribute {attrname!r} is missing its type annotation"
                )

        super().__init_subclass__()


class TypedAttributeProvider:
    """Base class for classes that wish to provide typed extra attributes."""

    @property
    def extra_attributes(self) -> Mapping[T_Attr, Callable[[], T_Attr]]:
        """
        A mapping of the extra attributes to callables that return the corresponding values.

        If the provider wraps another provider, the attributes from that wrapper should also be
        included in the returned mapping (but the wrapper may override the callables from the
        wrapped instance).

        """
        return {}

    @overload
    def extra(self, attribute: T_Attr) -> T_Attr:
        ...

    @overload
    def extra(self, attribute: T_Attr, default: T_Default) -> T_Attr | T_Default:
        ...

    @final
    def extra(self, attribute: Any, default: object = undefined) -> object:
        """
        extra(attribute, default=undefined)

        Return the value of the given typed extra attribute.

        :param attribute: the attribute (member of a :class:`~TypedAttributeSet`) to look for
        :param default: the value that should be returned if no value is found for the attribute
        :raises ~anyio.TypedAttributeLookupError: if the search failed and no default value was
            given

        """
        try:
            return self.extra_attributes[attribute]()
        except KeyError:
            if default is undefined:
                raise TypedAttributeLookupError("Attribute not found") from None
            else:
                return default
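The typed-attribute machinery above uses unique sentinel objects as lookup keys and maps them to zero-argument callables, so values are computed lazily on access. A standalone miniature of the same lookup (a sketch that re-declares the pieces rather than importing the real classes; the attribute set and values are illustrative):

```python
from typing import Any, Callable, Mapping

def typed_attribute() -> Any:
    # Each call returns a unique object, usable as a dictionary key.
    return object()

class PathAttribute:
    current_path = typed_attribute()

class Provider:
    @property
    def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
        # Values are callables so lookups are computed lazily.
        return {PathAttribute.current_path: lambda: "/tmp/demo"}

    def extra(self, attribute: Any, default: object = None) -> object:
        try:
            return self.extra_attributes[attribute]()
        except KeyError:
            return default

provider = Provider()
found = provider.extra(PathAttribute.current_path)
missing = provider.extra(typed_attribute(), default="fallback")
```

Because every marker is a distinct object, two attribute sets can never collide even if their attribute names match.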
@@ -1,90 +0,0 @@
from __future__ import annotations

__all__ = (
    "AsyncResource",
    "IPAddressType",
    "IPSockAddrType",
    "SocketAttribute",
    "SocketStream",
    "SocketListener",
    "UDPSocket",
    "UNIXSocketStream",
    "UDPPacketType",
    "ConnectedUDPSocket",
    "UnreliableObjectReceiveStream",
    "UnreliableObjectSendStream",
    "UnreliableObjectStream",
    "ObjectReceiveStream",
    "ObjectSendStream",
    "ObjectStream",
    "ByteReceiveStream",
    "ByteSendStream",
    "ByteStream",
    "AnyUnreliableByteReceiveStream",
    "AnyUnreliableByteSendStream",
    "AnyUnreliableByteStream",
    "AnyByteReceiveStream",
    "AnyByteSendStream",
    "AnyByteStream",
    "Listener",
    "Process",
    "Event",
    "Condition",
    "Lock",
    "Semaphore",
    "CapacityLimiter",
    "CancelScope",
    "TaskGroup",
    "TaskStatus",
    "TestRunner",
    "BlockingPortal",
)

from typing import Any

from ._resources import AsyncResource
from ._sockets import (
    ConnectedUDPSocket,
    IPAddressType,
    IPSockAddrType,
    SocketAttribute,
    SocketListener,
    SocketStream,
    UDPPacketType,
    UDPSocket,
    UNIXSocketStream,
)
from ._streams import (
    AnyByteReceiveStream,
    AnyByteSendStream,
    AnyByteStream,
    AnyUnreliableByteReceiveStream,
    AnyUnreliableByteSendStream,
    AnyUnreliableByteStream,
    ByteReceiveStream,
    ByteSendStream,
    ByteStream,
    Listener,
    ObjectReceiveStream,
    ObjectSendStream,
    ObjectStream,
    UnreliableObjectReceiveStream,
    UnreliableObjectSendStream,
    UnreliableObjectStream,
)
from ._subprocesses import Process
from ._tasks import TaskGroup, TaskStatus
from ._testing import TestRunner

# Re-exported here, for backwards compatibility
# isort: off
from .._core._synchronization import CapacityLimiter, Condition, Event, Lock, Semaphore
from .._core._tasks import CancelScope
from ..from_thread import BlockingPortal

# Re-export imports so they look like they live directly in this package
key: str
value: Any
for key, value in list(locals().items()):
    if getattr(value, "__module__", "").startswith("anyio.abc."):
        value.__module__ = __name__
Binary file not shown.
@@ -1,31 +0,0 @@
from __future__ import annotations

from abc import ABCMeta, abstractmethod
from types import TracebackType
from typing import TypeVar

T = TypeVar("T")


class AsyncResource(metaclass=ABCMeta):
    """
    Abstract base class for all closeable asynchronous resources.

    Works as an asynchronous context manager which returns the instance itself on enter, and calls
    :meth:`aclose` on exit.
    """

    async def __aenter__(self: T) -> T:
        return self

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> None:
        await self.aclose()

    @abstractmethod
    async def aclose(self) -> None:
        """Close the resource."""
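The context-manager contract above (enter returns the instance, exit calls `aclose()`) can be exercised with a minimal concrete subclass. The `DummyConnection` class here is hypothetical, and plain `asyncio` stands in for an AnyIO event loop:

```python
import asyncio
from abc import ABCMeta, abstractmethod
from typing import TypeVar

T = TypeVar("T")


class AsyncResource(metaclass=ABCMeta):
    # Same protocol as above: enter returns self, exit calls aclose().
    async def __aenter__(self: T) -> T:
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
        await self.aclose()

    @abstractmethod
    async def aclose(self) -> None:
        """Close the resource."""


class DummyConnection(AsyncResource):
    def __init__(self) -> None:
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True


async def main() -> bool:
    async with DummyConnection() as conn:
        assert not conn.closed  # still open inside the block
    return conn.closed  # aclose() ran on exit


print(asyncio.run(main()))  # True
```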
@@ -1,160 +0,0 @@
from __future__ import annotations

import socket
from abc import abstractmethod
from contextlib import AsyncExitStack
from io import IOBase
from ipaddress import IPv4Address, IPv6Address
from socket import AddressFamily
from typing import (
    Any,
    Callable,
    Collection,
    Mapping,
    Tuple,
    TypeVar,
    Union,
)

from .._core._tasks import create_task_group
from .._core._typedattr import (
    TypedAttributeProvider,
    TypedAttributeSet,
    typed_attribute,
)
from ._streams import ByteStream, Listener, UnreliableObjectStream
from ._tasks import TaskGroup

IPAddressType = Union[str, IPv4Address, IPv6Address]
IPSockAddrType = Tuple[str, int]
SockAddrType = Union[IPSockAddrType, str]
UDPPacketType = Tuple[bytes, IPSockAddrType]
T_Retval = TypeVar("T_Retval")


class SocketAttribute(TypedAttributeSet):
    #: the address family of the underlying socket
    family: AddressFamily = typed_attribute()
    #: the local socket address of the underlying socket
    local_address: SockAddrType = typed_attribute()
    #: for IP addresses, the local port the underlying socket is bound to
    local_port: int = typed_attribute()
    #: the underlying stdlib socket object
    raw_socket: socket.socket = typed_attribute()
    #: the remote address the underlying socket is connected to
    remote_address: SockAddrType = typed_attribute()
    #: for IP addresses, the remote port the underlying socket is connected to
    remote_port: int = typed_attribute()


class _SocketProvider(TypedAttributeProvider):
    @property
    def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
        from .._core._sockets import convert_ipv6_sockaddr as convert

        attributes: dict[Any, Callable[[], Any]] = {
            SocketAttribute.family: lambda: self._raw_socket.family,
            SocketAttribute.local_address: lambda: convert(
                self._raw_socket.getsockname()
            ),
            SocketAttribute.raw_socket: lambda: self._raw_socket,
        }
        try:
            peername: tuple[str, int] | None = convert(self._raw_socket.getpeername())
        except OSError:
            peername = None

        # Provide the remote address for connected sockets
        if peername is not None:
            attributes[SocketAttribute.remote_address] = lambda: peername

        # Provide local and remote ports for IP based sockets
        if self._raw_socket.family in (AddressFamily.AF_INET, AddressFamily.AF_INET6):
            attributes[
                SocketAttribute.local_port
            ] = lambda: self._raw_socket.getsockname()[1]
            if peername is not None:
                remote_port = peername[1]
                attributes[SocketAttribute.remote_port] = lambda: remote_port

        return attributes

    @property
    @abstractmethod
    def _raw_socket(self) -> socket.socket:
        pass


class SocketStream(ByteStream, _SocketProvider):
    """
    Transports bytes over a socket.

    Supports all relevant extra attributes from :class:`~SocketAttribute`.
    """


class UNIXSocketStream(SocketStream):
    @abstractmethod
    async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None:
        """
        Send file descriptors along with a message to the peer.

        :param message: a non-empty bytestring
        :param fds: a collection of files (either numeric file descriptors or open file or socket
            objects)
        """

    @abstractmethod
    async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]:
        """
        Receive file descriptors along with a message from the peer.

        :param msglen: length of the message to expect from the peer
        :param maxfds: maximum number of file descriptors to expect from the peer
        :return: a tuple of (message, file descriptors)
        """


class SocketListener(Listener[SocketStream], _SocketProvider):
    """
    Listens to incoming socket connections.

    Supports all relevant extra attributes from :class:`~SocketAttribute`.
    """

    @abstractmethod
    async def accept(self) -> SocketStream:
        """Accept an incoming connection."""

    async def serve(
        self,
        handler: Callable[[SocketStream], Any],
        task_group: TaskGroup | None = None,
    ) -> None:
        async with AsyncExitStack() as exit_stack:
            if task_group is None:
                task_group = await exit_stack.enter_async_context(create_task_group())

            while True:
                stream = await self.accept()
                task_group.start_soon(handler, stream)


class UDPSocket(UnreliableObjectStream[UDPPacketType], _SocketProvider):
    """
    Represents an unconnected UDP socket.

    Supports all relevant extra attributes from :class:`~SocketAttribute`.
    """

    async def sendto(self, data: bytes, host: str, port: int) -> None:
        """Alias for :meth:`~.UnreliableObjectSendStream.send` ((data, (host, port)))."""
        return await self.send((data, (host, port)))


class ConnectedUDPSocket(UnreliableObjectStream[bytes], _SocketProvider):
    """
    Represents a connected UDP socket.

    Supports all relevant extra attributes from :class:`~SocketAttribute`.
    """
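The default `serve()` loop above simply accepts connections forever and hands each one to a task. A rough stand-in for that accept-and-dispatch pattern, using plain `asyncio` and an in-memory queue instead of a real socket (the `FakeListener` class and all names are hypothetical, and the loop is bounded so the demo terminates):

```python
import asyncio


class FakeListener:
    """In-memory stand-in for a SocketListener: 'accepts' queued items."""

    def __init__(self, conns):
        self._queue = asyncio.Queue()
        for conn in conns:
            self._queue.put_nowait(conn)

    async def accept(self):
        return await self._queue.get()

    async def serve(self, handler, count):
        # Like SocketListener.serve(): accept, then dispatch to a task.
        tasks = []
        for _ in range(count):
            conn = await self.accept()
            tasks.append(asyncio.ensure_future(handler(conn)))
        await asyncio.gather(*tasks)


handled = []


async def handler(conn):
    handled.append(conn)


async def main():
    await FakeListener(["c1", "c2", "c3"]).serve(handler, count=3)


asyncio.run(main())
print(handled)  # ['c1', 'c2', 'c3']
```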
@@ -1,203 +0,0 @@
from __future__ import annotations

from abc import abstractmethod
from typing import Any, Callable, Generic, TypeVar, Union

from .._core._exceptions import EndOfStream
from .._core._typedattr import TypedAttributeProvider
from ._resources import AsyncResource
from ._tasks import TaskGroup

T_Item = TypeVar("T_Item")
T_co = TypeVar("T_co", covariant=True)
T_contra = TypeVar("T_contra", contravariant=True)


class UnreliableObjectReceiveStream(
    Generic[T_co], AsyncResource, TypedAttributeProvider
):
    """
    An interface for receiving objects.

    This interface makes no guarantees that the received messages arrive in the order in which they
    were sent, or that no messages are missed.

    Asynchronously iterating over objects of this type will yield objects matching the given type
    parameter.
    """

    def __aiter__(self) -> UnreliableObjectReceiveStream[T_co]:
        return self

    async def __anext__(self) -> T_co:
        try:
            return await self.receive()
        except EndOfStream:
            raise StopAsyncIteration

    @abstractmethod
    async def receive(self) -> T_co:
        """
        Receive the next item.

        :raises ~anyio.ClosedResourceError: if the receive stream has been explicitly
            closed
        :raises ~anyio.EndOfStream: if this stream has been closed from the other end
        :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable
            due to external causes
        """


class UnreliableObjectSendStream(
    Generic[T_contra], AsyncResource, TypedAttributeProvider
):
    """
    An interface for sending objects.

    This interface makes no guarantees that the messages sent will reach the recipient(s) in the
    same order in which they were sent, or at all.
    """

    @abstractmethod
    async def send(self, item: T_contra) -> None:
        """
        Send an item to the peer(s).

        :param item: the item to send
        :raises ~anyio.ClosedResourceError: if the send stream has been explicitly
            closed
        :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable
            due to external causes
        """


class UnreliableObjectStream(
    UnreliableObjectReceiveStream[T_Item], UnreliableObjectSendStream[T_Item]
):
    """
    A bidirectional message stream which does not guarantee the order or reliability of message
    delivery.
    """


class ObjectReceiveStream(UnreliableObjectReceiveStream[T_co]):
    """
    A receive message stream which guarantees that messages are received in the same order in
    which they were sent, and that no messages are missed.
    """


class ObjectSendStream(UnreliableObjectSendStream[T_contra]):
    """
    A send message stream which guarantees that messages are delivered in the same order in which
    they were sent, without missing any messages in the middle.
    """


class ObjectStream(
    ObjectReceiveStream[T_Item],
    ObjectSendStream[T_Item],
    UnreliableObjectStream[T_Item],
):
    """
    A bidirectional message stream which guarantees the order and reliability of message delivery.
    """

    @abstractmethod
    async def send_eof(self) -> None:
        """
        Send an end-of-file indication to the peer.

        You should not try to send any further data to this stream after calling this method.
        This method is idempotent (does nothing on successive calls).
        """


class ByteReceiveStream(AsyncResource, TypedAttributeProvider):
    """
    An interface for receiving bytes from a single peer.

    Iterating this byte stream will yield a byte string of arbitrary length, but no more than
    65536 bytes.
    """

    def __aiter__(self) -> ByteReceiveStream:
        return self

    async def __anext__(self) -> bytes:
        try:
            return await self.receive()
        except EndOfStream:
            raise StopAsyncIteration

    @abstractmethod
    async def receive(self, max_bytes: int = 65536) -> bytes:
        """
        Receive at most ``max_bytes`` bytes from the peer.

        .. note:: Implementors of this interface should not return an empty :class:`bytes` object,
            and users should ignore them.

        :param max_bytes: maximum number of bytes to receive
        :return: the received bytes
        :raises ~anyio.EndOfStream: if this stream has been closed from the other end
        """


class ByteSendStream(AsyncResource, TypedAttributeProvider):
    """An interface for sending bytes to a single peer."""

    @abstractmethod
    async def send(self, item: bytes) -> None:
        """
        Send the given bytes to the peer.

        :param item: the bytes to send
        """


class ByteStream(ByteReceiveStream, ByteSendStream):
    """A bidirectional byte stream."""

    @abstractmethod
    async def send_eof(self) -> None:
        """
        Send an end-of-file indication to the peer.

        You should not try to send any further data to this stream after calling this method.
        This method is idempotent (does nothing on successive calls).
        """


#: Type alias for all unreliable bytes-oriented receive streams.
AnyUnreliableByteReceiveStream = Union[
    UnreliableObjectReceiveStream[bytes], ByteReceiveStream
]
#: Type alias for all unreliable bytes-oriented send streams.
AnyUnreliableByteSendStream = Union[UnreliableObjectSendStream[bytes], ByteSendStream]
#: Type alias for all unreliable bytes-oriented streams.
AnyUnreliableByteStream = Union[UnreliableObjectStream[bytes], ByteStream]
#: Type alias for all bytes-oriented receive streams.
AnyByteReceiveStream = Union[ObjectReceiveStream[bytes], ByteReceiveStream]
#: Type alias for all bytes-oriented send streams.
AnyByteSendStream = Union[ObjectSendStream[bytes], ByteSendStream]
#: Type alias for all bytes-oriented streams.
AnyByteStream = Union[ObjectStream[bytes], ByteStream]


class Listener(Generic[T_co], AsyncResource, TypedAttributeProvider):
    """An interface for objects that let you accept incoming connections."""

    @abstractmethod
    async def serve(
        self,
        handler: Callable[[T_co], Any],
        task_group: TaskGroup | None = None,
    ) -> None:
        """
        Accept incoming connections as they come in and start tasks to handle them.

        :param handler: a callable that will be used to handle each accepted connection
        :param task_group: the task group that will be used to start tasks for handling each
            accepted connection (if omitted, an ad-hoc task group will be created)
        """
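The `__anext__` implementations above translate `EndOfStream` into `StopAsyncIteration`, which is what makes `async for` over a stream terminate cleanly when the peer closes. A toy receive stream illustrating that translation (the `ListReceiveStream` class is hypothetical, and a local exception class stands in for `anyio.EndOfStream`):

```python
import asyncio


class EndOfStream(Exception):
    """Stand-in for anyio.EndOfStream."""


class ListReceiveStream:
    """A receive stream backed by a plain list, for demonstration only."""

    def __init__(self, items):
        self._items = list(items)

    async def receive(self):
        if not self._items:
            raise EndOfStream
        return self._items.pop(0)

    def __aiter__(self):
        return self

    async def __anext__(self):
        # Same translation as in the abstract classes above.
        try:
            return await self.receive()
        except EndOfStream:
            raise StopAsyncIteration


async def main():
    received = []
    async for item in ListReceiveStream([1, 2, 3]):
        received.append(item)
    return received


print(asyncio.run(main()))  # [1, 2, 3]
```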
@@ -1,79 +0,0 @@
from __future__ import annotations

from abc import abstractmethod
from signal import Signals

from ._resources import AsyncResource
from ._streams import ByteReceiveStream, ByteSendStream


class Process(AsyncResource):
    """An asynchronous version of :class:`subprocess.Popen`."""

    @abstractmethod
    async def wait(self) -> int:
        """
        Wait until the process exits.

        :return: the exit code of the process
        """

    @abstractmethod
    def terminate(self) -> None:
        """
        Terminates the process, gracefully if possible.

        On Windows, this calls ``TerminateProcess()``.
        On POSIX systems, this sends ``SIGTERM`` to the process.

        .. seealso:: :meth:`subprocess.Popen.terminate`
        """

    @abstractmethod
    def kill(self) -> None:
        """
        Kills the process.

        On Windows, this calls ``TerminateProcess()``.
        On POSIX systems, this sends ``SIGKILL`` to the process.

        .. seealso:: :meth:`subprocess.Popen.kill`
        """

    @abstractmethod
    def send_signal(self, signal: Signals) -> None:
        """
        Send a signal to the subprocess.

        .. seealso:: :meth:`subprocess.Popen.send_signal`

        :param signal: the signal number (e.g. :data:`signal.SIGHUP`)
        """

    @property
    @abstractmethod
    def pid(self) -> int:
        """The process ID of the process."""

    @property
    @abstractmethod
    def returncode(self) -> int | None:
        """
        The return code of the process. If the process has not yet terminated, this will be
        ``None``.
        """

    @property
    @abstractmethod
    def stdin(self) -> ByteSendStream | None:
        """The stream for the standard input of the process."""

    @property
    @abstractmethod
    def stdout(self) -> ByteReceiveStream | None:
        """The stream for the standard output of the process."""

    @property
    @abstractmethod
    def stderr(self) -> ByteReceiveStream | None:
        """The stream for the standard error output of the process."""
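The `Process` interface above maps closely onto `asyncio`'s own subprocess support. A sketch of the `wait()`/`returncode` part of the contract, using the real `asyncio.create_subprocess_exec` with a trivial child process (everything else here is illustrative):

```python
import asyncio
import sys


async def main() -> int:
    # Spawn a trivial child process, mirroring what an AnyIO Process wraps.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "pass",
        stdout=asyncio.subprocess.DEVNULL,
        stderr=asyncio.subprocess.DEVNULL,
    )
    # Like Process.wait(): block until the child exits, return its exit code.
    code = await proc.wait()
    # returncode is populated once the process has been waited on.
    assert proc.returncode == code
    return code


print(asyncio.run(main()))  # 0
```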
@@ -1,119 +0,0 @@
from __future__ import annotations

import sys
from abc import ABCMeta, abstractmethod
from types import TracebackType
from typing import TYPE_CHECKING, Any, Awaitable, Callable, TypeVar, overload
from warnings import warn

if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    from typing_extensions import Protocol

if TYPE_CHECKING:
    from anyio._core._tasks import CancelScope

T_Retval = TypeVar("T_Retval")
T_contra = TypeVar("T_contra", contravariant=True)


class TaskStatus(Protocol[T_contra]):
    @overload
    def started(self: TaskStatus[None]) -> None:
        ...

    @overload
    def started(self, value: T_contra) -> None:
        ...

    def started(self, value: T_contra | None = None) -> None:
        """
        Signal that the task has started.

        :param value: object passed back to the starter of the task
        """


class TaskGroup(metaclass=ABCMeta):
    """
    Groups several asynchronous tasks together.

    :ivar cancel_scope: the cancel scope inherited by all child tasks
    :vartype cancel_scope: CancelScope
    """

    cancel_scope: CancelScope

    async def spawn(
        self,
        func: Callable[..., Awaitable[Any]],
        *args: object,
        name: object = None,
    ) -> None:
        """
        Start a new task in this task group.

        :param func: a coroutine function
        :param args: positional arguments to call the function with
        :param name: name of the task, for the purposes of introspection and debugging

        .. deprecated:: 3.0
           Use :meth:`start_soon` instead. If your code needs AnyIO 2 compatibility, you
           can keep using this until AnyIO 4.

        """
        warn(
            'spawn() is deprecated -- use start_soon() (without the "await") instead',
            DeprecationWarning,
        )
        self.start_soon(func, *args, name=name)

    @abstractmethod
    def start_soon(
        self,
        func: Callable[..., Awaitable[Any]],
        *args: object,
        name: object = None,
    ) -> None:
        """
        Start a new task in this task group.

        :param func: a coroutine function
        :param args: positional arguments to call the function with
        :param name: name of the task, for the purposes of introspection and debugging

        .. versionadded:: 3.0
        """

    @abstractmethod
    async def start(
        self,
        func: Callable[..., Awaitable[Any]],
        *args: object,
        name: object = None,
    ) -> Any:
        """
        Start a new task and wait until it signals for readiness.

        :param func: a coroutine function
        :param args: positional arguments to call the function with
        :param name: name of the task, for the purposes of introspection and debugging
        :return: the value passed to ``task_status.started()``
        :raises RuntimeError: if the task finishes without calling ``task_status.started()``

        .. versionadded:: 3.0
        """

    @abstractmethod
    async def __aenter__(self) -> TaskGroup:
        """Enter the task group context and allow starting new tasks."""

    @abstractmethod
    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        """Exit the task group context waiting for all tasks to finish."""
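`TaskGroup.start()` blocks until the spawned task calls `task_status.started(value)`, then returns that value. The handshake can be mimicked with a plain `asyncio` future; the `SimpleTaskStatus` class, the free-standing `start()` helper, and the `server` task here are all hypothetical illustrations of the pattern, not the real implementation:

```python
import asyncio


class SimpleTaskStatus:
    """Minimal stand-in for TaskStatus: resolves a future on started()."""

    def __init__(self, future):
        self._future = future

    def started(self, value=None):
        self._future.set_result(value)


async def start(func, *args):
    # Like TaskGroup.start(): run the task, return once it reports readiness.
    loop = asyncio.get_running_loop()
    ready = loop.create_future()
    task = asyncio.ensure_future(func(*args, task_status=SimpleTaskStatus(ready)))
    value = await ready  # blocks until the task calls started()
    return value, task


async def server(task_status):
    port = 8080  # pretend we bound a listener here before signalling readiness
    task_status.started(port)
    await asyncio.sleep(0)  # keep "serving" briefly


async def main():
    port, task = await start(server)
    await task
    return port


print(asyncio.run(main()))  # 8080
```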
@@ -1,70 +0,0 @@
from __future__ import annotations

import types
from abc import ABCMeta, abstractmethod
from collections.abc import AsyncGenerator, Iterable
from typing import Any, Callable, Coroutine, TypeVar

_T = TypeVar("_T")


class TestRunner(metaclass=ABCMeta):
    """
    Encapsulates a running event loop. Every call made through this object will use the same event
    loop.
    """

    def __enter__(self) -> TestRunner:
        return self

    def __exit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: types.TracebackType | None,
    ) -> bool | None:
        self.close()
        return None

    @abstractmethod
    def close(self) -> None:
        """Close the event loop."""

    @abstractmethod
    def run_asyncgen_fixture(
        self,
        fixture_func: Callable[..., AsyncGenerator[_T, Any]],
        kwargs: dict[str, Any],
    ) -> Iterable[_T]:
        """
        Run an async generator fixture.

        :param fixture_func: the fixture function
        :param kwargs: keyword arguments to call the fixture function with
        :return: an iterator yielding the value yielded from the async generator
        """

    @abstractmethod
    def run_fixture(
        self,
        fixture_func: Callable[..., Coroutine[Any, Any, _T]],
        kwargs: dict[str, Any],
    ) -> _T:
        """
        Run an async fixture.

        :param fixture_func: the fixture function
        :param kwargs: keyword arguments to call the fixture function with
        :return: the return value of the fixture function
        """

    @abstractmethod
    def run_test(
        self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any]
    ) -> None:
        """
        Run an async test function.

        :param test_func: the test function
        :param kwargs: keyword arguments to call the test function with
        """
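A `TestRunner` wraps one event loop behind a synchronous context manager, so every fixture and test runs on the same loop. A bare-bones asyncio-backed runner implementing just `run_test` and `close` (a sketch under that reading of the interface, not the real backend code):

```python
import asyncio


class AsyncioTestRunner:
    """Every call through this object reuses the same event loop."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return None

    def close(self):
        self._loop.close()

    def run_test(self, test_func, kwargs):
        # Same signature as TestRunner.run_test above.
        self._loop.run_until_complete(test_func(**kwargs))


results = []


async def sample_test(x):
    await asyncio.sleep(0)
    results.append(x * 2)


with AsyncioTestRunner() as runner:
    runner.run_test(sample_test, {"x": 21})

print(results)  # [42]
```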
@@ -1,500 +0,0 @@
|
|||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
import threading
|
|
||||||
from asyncio import iscoroutine
|
|
||||||
from concurrent.futures import FIRST_COMPLETED, Future, ThreadPoolExecutor, wait
|
|
||||||
from contextlib import AbstractContextManager, contextmanager
|
|
||||||
from types import TracebackType
|
|
||||||
from typing import (
|
|
||||||
Any,
|
|
||||||
AsyncContextManager,
|
|
||||||
Awaitable,
|
|
||||||
Callable,
|
|
||||||
ContextManager,
|
|
||||||
Generator,
|
|
||||||
Generic,
|
|
||||||
Iterable,
|
|
||||||
TypeVar,
|
|
||||||
cast,
|
|
||||||
overload,
|
|
||||||
)
|
|
||||||
from warnings import warn
|
|
||||||
|
|
||||||
from ._core import _eventloop
|
|
||||||
from ._core._eventloop import get_asynclib, get_cancelled_exc_class, threadlocals
|
|
||||||
from ._core._synchronization import Event
|
|
||||||
from ._core._tasks import CancelScope, create_task_group
|
|
||||||
from .abc._tasks import TaskStatus
|
|
||||||
|
|
||||||
T_Retval = TypeVar("T_Retval")
|
|
||||||
T_co = TypeVar("T_co")
|
|
||||||
|
|
||||||
|
|
||||||
def run(func: Callable[..., Awaitable[T_Retval]], *args: object) -> T_Retval:
|
|
||||||
"""
|
|
||||||
Call a coroutine function from a worker thread.
|
|
||||||
|
|
||||||
:param func: a coroutine function
|
|
||||||
:param args: positional arguments for the callable
|
|
||||||
:return: the return value of the coroutine function
|
|
||||||
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
asynclib = threadlocals.current_async_module
|
|
||||||
except AttributeError:
|
|
||||||
raise RuntimeError("This function can only be run from an AnyIO worker thread")
|
|
||||||
|
|
||||||
return asynclib.run_async_from_thread(func, *args)
|
|
||||||
|
|
||||||
|
|
||||||
def run_async_from_thread(
|
|
||||||
func: Callable[..., Awaitable[T_Retval]], *args: object
|
|
||||||
) -> T_Retval:
|
|
||||||
warn(
|
|
||||||
"run_async_from_thread() has been deprecated, use anyio.from_thread.run() instead",
|
|
||||||
DeprecationWarning,
|
|
||||||
)
|
|
||||||
return run(func, *args)
|
|
||||||
|
|
||||||
|
|
||||||
def run_sync(func: Callable[..., T_Retval], *args: object) -> T_Retval:
|
|
||||||
"""
|
|
||||||
Call a function in the event loop thread from a worker thread.
|
|
||||||
|
|
||||||
:param func: a callable
|
|
||||||
:param args: positional arguments for the callable
|
|
||||||
:return: the return value of the callable
|
|
||||||
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
asynclib = threadlocals.current_async_module
|
|
||||||
except AttributeError:
|
|
||||||
raise RuntimeError("This function can only be run from an AnyIO worker thread")
|
|
||||||
|
|
||||||
return asynclib.run_sync_from_thread(func, *args)
|
|
||||||
|
|
||||||
|
|
||||||
def run_sync_from_thread(func: Callable[..., T_Retval], *args: object) -> T_Retval:
|
|
||||||
warn(
|
|
||||||
"run_sync_from_thread() has been deprecated, use anyio.from_thread.run_sync() instead",
|
|
||||||
DeprecationWarning,
|
|
||||||
)
|
|
||||||
return run_sync(func, *args)
|
|
||||||
|
|
||||||
|
|
||||||
class _BlockingAsyncContextManager(Generic[T_co], AbstractContextManager):
|
|
||||||
_enter_future: Future
|
|
||||||
_exit_future: Future
|
|
||||||
_exit_event: Event
|
|
||||||
_exit_exc_info: tuple[
|
|
||||||
type[BaseException] | None, BaseException | None, TracebackType | None
|
|
||||||
] = (None, None, None)
|
|
||||||
|
|
||||||
def __init__(self, async_cm: AsyncContextManager[T_co], portal: BlockingPortal):
|
|
||||||
self._async_cm = async_cm
|
|
||||||
self._portal = portal
|
|
||||||
|
|
||||||
async def run_async_cm(self) -> bool | None:
|
|
||||||
try:
|
|
||||||
self._exit_event = Event()
|
|
||||||
value = await self._async_cm.__aenter__()
|
|
||||||
except BaseException as exc:
|
|
||||||
self._enter_future.set_exception(exc)
|
|
||||||
raise
|
|
||||||
else:
|
|
||||||
self._enter_future.set_result(value)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Wait for the sync context manager to exit.
|
|
||||||
# This next statement can raise `get_cancelled_exc_class()` if
|
|
||||||
# something went wrong in a task group in this async context
|
|
||||||
# manager.
|
|
||||||
await self._exit_event.wait()
|
|
||||||
finally:
|
|
||||||
# In case of cancellation, it could be that we end up here before
|
|
||||||
# `_BlockingAsyncContextManager.__exit__` is called, and an
|
|
||||||
# `_exit_exc_info` has been set.
|
|
||||||
result = await self._async_cm.__aexit__(*self._exit_exc_info)
|
|
||||||
return result
|
|
||||||
|
|
||||||
def __enter__(self) -> T_co:
|
|
||||||
self._enter_future = Future()
|
|
||||||
self._exit_future = self._portal.start_task_soon(self.run_async_cm)
|
|
||||||
cm = self._enter_future.result()
|
|
||||||
return cast(T_co, cm)
|
|
||||||
|
|
||||||
def __exit__(
|
|
||||||
self,
|
|
||||||
__exc_type: type[BaseException] | None,
|
|
||||||
__exc_value: BaseException | None,
|
|
||||||
__traceback: TracebackType | None,
|
|
||||||
) -> bool | None:
|
|
||||||
self._exit_exc_info = __exc_type, __exc_value, __traceback
|
|
||||||
self._portal.call(self._exit_event.set)
|
|
||||||
return self._exit_future.result()
|
|
||||||
|
|
||||||
|
|
||||||
class _BlockingPortalTaskStatus(TaskStatus):
|
|
||||||
def __init__(self, future: Future):
|
|
||||||
self._future = future
|
|
||||||
|
|
||||||
def started(self, value: object = None) -> None:
|
|
||||||
self._future.set_result(value)
|
|
||||||
|
|
||||||
|
|
||||||
class BlockingPortal:
|
|
||||||
"""An object that lets external threads run code in an asynchronous event loop."""
|
|
||||||
|
|
||||||
def __new__(cls) -> BlockingPortal:
|
|
||||||
return get_asynclib().BlockingPortal()
|
|
||||||
|
|
||||||
def __init__(self) -> None:
|
|
||||||
self._event_loop_thread_id: int | None = threading.get_ident()
|
|
||||||
self._stop_event = Event()
|
|
||||||
self._task_group = create_task_group()
|
|
||||||
self._cancelled_exc_class = get_cancelled_exc_class()
|
|
||||||
|
|
||||||
async def __aenter__(self) -> BlockingPortal:
        await self._task_group.__aenter__()
        return self

    async def __aexit__(
        self,
        exc_type: type[BaseException] | None,
        exc_val: BaseException | None,
        exc_tb: TracebackType | None,
    ) -> bool | None:
        await self.stop()
        return await self._task_group.__aexit__(exc_type, exc_val, exc_tb)

    def _check_running(self) -> None:
        if self._event_loop_thread_id is None:
            raise RuntimeError("This portal is not running")
        if self._event_loop_thread_id == threading.get_ident():
            raise RuntimeError(
                "This method cannot be called from the event loop thread"
            )

    async def sleep_until_stopped(self) -> None:
        """Sleep until :meth:`stop` is called."""
        await self._stop_event.wait()

    async def stop(self, cancel_remaining: bool = False) -> None:
        """
        Signal the portal to shut down.

        This marks the portal as no longer accepting new calls and exits from
        :meth:`sleep_until_stopped`.

        :param cancel_remaining: ``True`` to cancel all the remaining tasks, ``False`` to let them
            finish before returning

        """
        self._event_loop_thread_id = None
        self._stop_event.set()
        if cancel_remaining:
            self._task_group.cancel_scope.cancel()

    async def _call_func(
        self, func: Callable, args: tuple, kwargs: dict[str, Any], future: Future
    ) -> None:
        def callback(f: Future) -> None:
            if f.cancelled() and self._event_loop_thread_id not in (
                None,
                threading.get_ident(),
            ):
                self.call(scope.cancel)

        try:
            retval = func(*args, **kwargs)
            if iscoroutine(retval):
                with CancelScope() as scope:
                    if future.cancelled():
                        scope.cancel()
                    else:
                        future.add_done_callback(callback)

                    retval = await retval
        except self._cancelled_exc_class:
            future.cancel()
        except BaseException as exc:
            if not future.cancelled():
                future.set_exception(exc)

            # Let base exceptions fall through
            if not isinstance(exc, Exception):
                raise
        else:
            if not future.cancelled():
                future.set_result(retval)
        finally:
            scope = None  # type: ignore[assignment]

    def _spawn_task_from_thread(
        self,
        func: Callable,
        args: tuple,
        kwargs: dict[str, Any],
        name: object,
        future: Future,
    ) -> None:
        """
        Spawn a new task using the given callable.

        Implementors must ensure that the future is resolved when the task finishes.

        :param func: a callable
        :param args: positional arguments to be passed to the callable
        :param kwargs: keyword arguments to be passed to the callable
        :param name: name of the task (will be coerced to a string if not ``None``)
        :param future: a future that will resolve to the return value of the callable, or the
            exception raised during its execution

        """
        raise NotImplementedError

    @overload
    def call(self, func: Callable[..., Awaitable[T_Retval]], *args: object) -> T_Retval:
        ...

    @overload
    def call(self, func: Callable[..., T_Retval], *args: object) -> T_Retval:
        ...

    def call(
        self, func: Callable[..., Awaitable[T_Retval] | T_Retval], *args: object
    ) -> T_Retval:
        """
        Call the given function in the event loop thread.

        If the callable returns a coroutine object, it is awaited on.

        :param func: any callable
        :raises RuntimeError: if the portal is not running or if this method is called from within
            the event loop thread

        """
        return cast(T_Retval, self.start_task_soon(func, *args).result())

    @overload
    def spawn_task(
        self,
        func: Callable[..., Awaitable[T_Retval]],
        *args: object,
        name: object = None,
    ) -> Future[T_Retval]:
        ...

    @overload
    def spawn_task(
        self, func: Callable[..., T_Retval], *args: object, name: object = None
    ) -> Future[T_Retval]:
        ...

    def spawn_task(
        self,
        func: Callable[..., Awaitable[T_Retval] | T_Retval],
        *args: object,
        name: object = None,
    ) -> Future[T_Retval]:
        """
        Start a task in the portal's task group.

        :param func: the target coroutine function
        :param args: positional arguments passed to ``func``
        :param name: name of the task (will be coerced to a string if not ``None``)
        :return: a future that resolves with the return value of the callable if the task completes
            successfully, or with the exception raised in the task
        :raises RuntimeError: if the portal is not running or if this method is called from within
            the event loop thread

        .. versionadded:: 2.1
        .. deprecated:: 3.0
           Use :meth:`start_task_soon` instead. If your code needs AnyIO 2 compatibility, you
           can keep using this until AnyIO 4.

        """
        warn(
            "spawn_task() is deprecated -- use start_task_soon() instead",
            DeprecationWarning,
        )
        return self.start_task_soon(func, *args, name=name)  # type: ignore[arg-type]

    @overload
    def start_task_soon(
        self,
        func: Callable[..., Awaitable[T_Retval]],
        *args: object,
        name: object = None,
    ) -> Future[T_Retval]:
        ...

    @overload
    def start_task_soon(
        self, func: Callable[..., T_Retval], *args: object, name: object = None
    ) -> Future[T_Retval]:
        ...

    def start_task_soon(
        self,
        func: Callable[..., Awaitable[T_Retval] | T_Retval],
        *args: object,
        name: object = None,
    ) -> Future[T_Retval]:
        """
        Start a task in the portal's task group.

        The task will be run inside a cancel scope which can be cancelled by cancelling the
        returned future.

        :param func: the target function
        :param args: positional arguments passed to ``func``
        :param name: name of the task (will be coerced to a string if not ``None``)
        :return: a future that resolves with the return value of the callable if the
            task completes successfully, or with the exception raised in the task
        :raises RuntimeError: if the portal is not running or if this method is called
            from within the event loop thread
        :rtype: concurrent.futures.Future[T_Retval]

        .. versionadded:: 3.0

        """
        self._check_running()
        f: Future = Future()
        self._spawn_task_from_thread(func, args, {}, name, f)
        return f

    def start_task(
        self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None
    ) -> tuple[Future[Any], Any]:
        """
        Start a task in the portal's task group and wait until it signals for readiness.

        This method works the same way as :meth:`.abc.TaskGroup.start`.

        :param func: the target function
        :param args: positional arguments passed to ``func``
        :param name: name of the task (will be coerced to a string if not ``None``)
        :return: a tuple of (future, task_status_value) where the ``task_status_value``
            is the value passed to ``task_status.started()`` from within the target
            function
        :rtype: tuple[concurrent.futures.Future[Any], Any]

        .. versionadded:: 3.0

        """

        def task_done(future: Future) -> None:
            if not task_status_future.done():
                if future.cancelled():
                    task_status_future.cancel()
                elif future.exception():
                    task_status_future.set_exception(future.exception())
                else:
                    exc = RuntimeError(
                        "Task exited without calling task_status.started()"
                    )
                    task_status_future.set_exception(exc)

        self._check_running()
        task_status_future: Future = Future()
        task_status = _BlockingPortalTaskStatus(task_status_future)
        f: Future = Future()
        f.add_done_callback(task_done)
        self._spawn_task_from_thread(func, args, {"task_status": task_status}, name, f)
        return f, task_status_future.result()

    def wrap_async_context_manager(
        self, cm: AsyncContextManager[T_co]
    ) -> ContextManager[T_co]:
        """
        Wrap an async context manager as a synchronous context manager via this portal.

        Spawns a task that will call both ``__aenter__()`` and ``__aexit__()``, stopping in the
        middle until the synchronous context manager exits.

        :param cm: an asynchronous context manager
        :return: a synchronous context manager

        .. versionadded:: 2.1

        """
        return _BlockingAsyncContextManager(cm, self)


def create_blocking_portal() -> BlockingPortal:
    """
    Create a portal for running functions in the event loop thread from external threads.

    Use this function in asynchronous code when you need to allow external threads access to the
    event loop where your asynchronous code is currently running.

    .. deprecated:: 3.0
       Use :class:`.BlockingPortal` directly.

    """
    warn(
        "create_blocking_portal() has been deprecated -- use anyio.from_thread.BlockingPortal() "
        "directly",
        DeprecationWarning,
    )
    return BlockingPortal()


@contextmanager
def start_blocking_portal(
    backend: str = "asyncio", backend_options: dict[str, Any] | None = None
) -> Generator[BlockingPortal, Any, None]:
    """
    Start a new event loop in a new thread and run a blocking portal in its main task.

    The parameters are the same as for :func:`~anyio.run`.

    :param backend: name of the backend
    :param backend_options: backend options
    :return: a context manager that yields a blocking portal

    .. versionchanged:: 3.0
       Usage as a context manager is now required.

    """

    async def run_portal() -> None:
        async with BlockingPortal() as portal_:
            if future.set_running_or_notify_cancel():
                future.set_result(portal_)
                await portal_.sleep_until_stopped()

    future: Future[BlockingPortal] = Future()
    with ThreadPoolExecutor(1) as executor:
        run_future = executor.submit(
            _eventloop.run,
            run_portal,  # type: ignore[arg-type]
            backend=backend,
            backend_options=backend_options,
        )
        try:
            wait(
                cast(Iterable[Future], [run_future, future]),
                return_when=FIRST_COMPLETED,
            )
        except BaseException:
            future.cancel()
            run_future.cancel()
            raise

        if future.done():
            portal = future.result()
            cancel_remaining_tasks = False
            try:
                yield portal
            except BaseException:
                cancel_remaining_tasks = True
                raise
            finally:
                try:
                    portal.call(portal.stop, cancel_remaining_tasks)
                except RuntimeError:
                    pass

        run_future.result()
@@ -1,174 +0,0 @@
from __future__ import annotations

import enum
import sys
from dataclasses import dataclass
from typing import Any, Generic, TypeVar, overload
from weakref import WeakKeyDictionary

from ._core._eventloop import get_asynclib

if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal

T = TypeVar("T")
D = TypeVar("D")


async def checkpoint() -> None:
    """
    Check for cancellation and allow the scheduler to switch to another task.

    Equivalent to (but more efficient than)::

        await checkpoint_if_cancelled()
        await cancel_shielded_checkpoint()


    .. versionadded:: 3.0

    """
    await get_asynclib().checkpoint()


async def checkpoint_if_cancelled() -> None:
    """
    Enter a checkpoint if the enclosing cancel scope has been cancelled.

    This does not allow the scheduler to switch to a different task.

    .. versionadded:: 3.0

    """
    await get_asynclib().checkpoint_if_cancelled()


async def cancel_shielded_checkpoint() -> None:
    """
    Allow the scheduler to switch to another task but without checking for cancellation.

    Equivalent to (but potentially more efficient than)::

        with CancelScope(shield=True):
            await checkpoint()


    .. versionadded:: 3.0

    """
    await get_asynclib().cancel_shielded_checkpoint()


def current_token() -> object:
    """Return a backend specific token object that can be used to get back to the event loop."""
    return get_asynclib().current_token()


_run_vars: WeakKeyDictionary[Any, dict[str, Any]] = WeakKeyDictionary()
_token_wrappers: dict[Any, _TokenWrapper] = {}


@dataclass(frozen=True)
class _TokenWrapper:
    __slots__ = "_token", "__weakref__"
    _token: object


class _NoValueSet(enum.Enum):
    NO_VALUE_SET = enum.auto()


class RunvarToken(Generic[T]):
    __slots__ = "_var", "_value", "_redeemed"

    def __init__(self, var: RunVar[T], value: T | Literal[_NoValueSet.NO_VALUE_SET]):
        self._var = var
        self._value: T | Literal[_NoValueSet.NO_VALUE_SET] = value
        self._redeemed = False


class RunVar(Generic[T]):
    """
    Like a :class:`~contextvars.ContextVar`, except scoped to the running event loop.
    """

    __slots__ = "_name", "_default"

    NO_VALUE_SET: Literal[_NoValueSet.NO_VALUE_SET] = _NoValueSet.NO_VALUE_SET

    _token_wrappers: set[_TokenWrapper] = set()

    def __init__(
        self,
        name: str,
        default: T | Literal[_NoValueSet.NO_VALUE_SET] = NO_VALUE_SET,
    ):
        self._name = name
        self._default = default

    @property
    def _current_vars(self) -> dict[str, T]:
        token = current_token()
        while True:
            try:
                return _run_vars[token]
            except TypeError:
                # Happens when token isn't weak referable (TrioToken).
                # This workaround does mean that some memory will leak on Trio until the problem
                # is fixed on their end.
                token = _TokenWrapper(token)
                self._token_wrappers.add(token)
            except KeyError:
                run_vars = _run_vars[token] = {}
                return run_vars

    @overload
    def get(self, default: D) -> T | D:
        ...

    @overload
    def get(self) -> T:
        ...

    def get(
        self, default: D | Literal[_NoValueSet.NO_VALUE_SET] = NO_VALUE_SET
    ) -> T | D:
        try:
            return self._current_vars[self._name]
        except KeyError:
            if default is not RunVar.NO_VALUE_SET:
                return default
            elif self._default is not RunVar.NO_VALUE_SET:
                return self._default

        raise LookupError(
            f'Run variable "{self._name}" has no value and no default set'
        )

    def set(self, value: T) -> RunvarToken[T]:
        current_vars = self._current_vars
        token = RunvarToken(self, current_vars.get(self._name, RunVar.NO_VALUE_SET))
        current_vars[self._name] = value
        return token

    def reset(self, token: RunvarToken[T]) -> None:
        if token._var is not self:
            raise ValueError("This token does not belong to this RunVar")

        if token._redeemed:
            raise ValueError("This token has already been used")

        if token._value is _NoValueSet.NO_VALUE_SET:
            try:
                del self._current_vars[self._name]
            except KeyError:
                pass
        else:
            self._current_vars[self._name] = token._value

        token._redeemed = True

    def __repr__(self) -> str:
        return f"<RunVar name={self._name!r}>"
@@ -1,142 +0,0 @@
from __future__ import annotations

from contextlib import contextmanager
from inspect import isasyncgenfunction, iscoroutinefunction
from typing import Any, Dict, Generator, Tuple, cast

import pytest
import sniffio

from ._core._eventloop import get_all_backends, get_asynclib
from .abc import TestRunner

_current_runner: TestRunner | None = None


def extract_backend_and_options(backend: object) -> tuple[str, dict[str, Any]]:
    if isinstance(backend, str):
        return backend, {}
    elif isinstance(backend, tuple) and len(backend) == 2:
        if isinstance(backend[0], str) and isinstance(backend[1], dict):
            return cast(Tuple[str, Dict[str, Any]], backend)

    raise TypeError("anyio_backend must be either a string or tuple of (string, dict)")


@contextmanager
def get_runner(
    backend_name: str, backend_options: dict[str, Any]
) -> Generator[TestRunner, object, None]:
    global _current_runner
    if _current_runner:
        yield _current_runner
        return

    asynclib = get_asynclib(backend_name)
    token = None
    if sniffio.current_async_library_cvar.get(None) is None:
        # Since we're in control of the event loop, we can cache the name of the async library
        token = sniffio.current_async_library_cvar.set(backend_name)

    try:
        backend_options = backend_options or {}
        with asynclib.TestRunner(**backend_options) as runner:
            _current_runner = runner
            yield runner
    finally:
        _current_runner = None
        if token:
            sniffio.current_async_library_cvar.reset(token)


def pytest_configure(config: Any) -> None:
    config.addinivalue_line(
        "markers",
        "anyio: mark the (coroutine function) test to be run "
        "asynchronously via anyio.",
    )


def pytest_fixture_setup(fixturedef: Any, request: Any) -> None:
    def wrapper(*args, anyio_backend, **kwargs):  # type: ignore[no-untyped-def]
        backend_name, backend_options = extract_backend_and_options(anyio_backend)
        if has_backend_arg:
            kwargs["anyio_backend"] = anyio_backend

        with get_runner(backend_name, backend_options) as runner:
            if isasyncgenfunction(func):
                yield from runner.run_asyncgen_fixture(func, kwargs)
            else:
                yield runner.run_fixture(func, kwargs)

    # Only apply this to coroutine functions and async generator functions in requests that involve
    # the anyio_backend fixture
    func = fixturedef.func
    if isasyncgenfunction(func) or iscoroutinefunction(func):
        if "anyio_backend" in request.fixturenames:
            has_backend_arg = "anyio_backend" in fixturedef.argnames
            fixturedef.func = wrapper
            if not has_backend_arg:
                fixturedef.argnames += ("anyio_backend",)


@pytest.hookimpl(tryfirst=True)
def pytest_pycollect_makeitem(collector: Any, name: Any, obj: Any) -> None:
    if collector.istestfunction(obj, name):
        inner_func = obj.hypothesis.inner_test if hasattr(obj, "hypothesis") else obj
        if iscoroutinefunction(inner_func):
            marker = collector.get_closest_marker("anyio")
            own_markers = getattr(obj, "pytestmark", ())
            if marker or any(marker.name == "anyio" for marker in own_markers):
                pytest.mark.usefixtures("anyio_backend")(obj)


@pytest.hookimpl(tryfirst=True)
def pytest_pyfunc_call(pyfuncitem: Any) -> bool | None:
    def run_with_hypothesis(**kwargs: Any) -> None:
        with get_runner(backend_name, backend_options) as runner:
            runner.run_test(original_func, kwargs)

    backend = pyfuncitem.funcargs.get("anyio_backend")
    if backend:
        backend_name, backend_options = extract_backend_and_options(backend)

        if hasattr(pyfuncitem.obj, "hypothesis"):
            # Wrap the inner test function unless it's already wrapped
            original_func = pyfuncitem.obj.hypothesis.inner_test
            if original_func.__qualname__ != run_with_hypothesis.__qualname__:
                if iscoroutinefunction(original_func):
                    pyfuncitem.obj.hypothesis.inner_test = run_with_hypothesis

            return None

        if iscoroutinefunction(pyfuncitem.obj):
            funcargs = pyfuncitem.funcargs
            testargs = {arg: funcargs[arg] for arg in pyfuncitem._fixtureinfo.argnames}
            with get_runner(backend_name, backend_options) as runner:
                runner.run_test(pyfuncitem.obj, testargs)

            return True

    return None


@pytest.fixture(params=get_all_backends())
def anyio_backend(request: Any) -> Any:
    return request.param


@pytest.fixture
def anyio_backend_name(anyio_backend: Any) -> str:
    if isinstance(anyio_backend, str):
        return anyio_backend
    else:
        return anyio_backend[0]


@pytest.fixture
def anyio_backend_options(anyio_backend: Any) -> dict[str, Any]:
    if isinstance(anyio_backend, str):
        return {}
    else:
        return anyio_backend[1]