Compare commits
21 Commits
mara ... noe-bailly

| Author | SHA1 | Date |
|---|---|---|
|  | 23091bd7ff |  |
|  | 5d5d48e13e |  |
|  | d17efbb2f2 |  |
|  | f77082d6cf |  |
|  | 9ae58a0f4f |  |
|  | bf12d1e5d4 |  |
|  | 278eee7246 |  |
|  | 2bb42ab8f6 |  |
|  | 7d55d258ac |  |
|  | a09846193d |  |
|  | 20c041878d |  |
|  | ae8cf7247d |  |
|  | 342a45a4f2 |  |
|  | b09af3ea95 |  |
|  | 79fa977d72 |  |
|  | 6304fb4def |  |
|  | 138e6b30d7 |  |
|  | 4e5642a83b |  |
|  | e13b25bbcd |  |
|  | ad3a364347 |  |
|  | 7ef8f2ffd5 |  |
10  .gitignore  vendored  Normal file

@@ -0,0 +1,10 @@
# Environments
venv/
.venv/
/pyvenv.cfg
.python-version

# Media
media/
downloaded_images
downloaded_videos
44  README.md

@@ -6,11 +6,49 @@ test port ssh

## how to
1. clone this repo to your own computer
```
git clone https://git.erg.school/P039/art_num_2024.git
```
2. check for updates before each course
```
git pull
```
3. create your own branch
```
git checkout -b prenom-nom
```
4. create and update README with your name
```
nano README
```
5. save the file with `ctrl+X`
6. check git tracking
```
git status
```
7. add and commit the change in git
```
git add README.md
git commit -m 'change README'
```
8. check that the commit was registered
```
git log
git show <commit-hash>
```
9. push your branch to gitea
```
git push origin prenom-nom
```
10. update your branch with the latest main branch
```
git checkout main
git pull origin main
git checkout prenom-nom
git rebase main
git log
```

## content
test
2  python/EXERCISE.md  Normal file

@@ -0,0 +1,2 @@
The student engages in creative research by making a commentary, a reflection, or a détournement of cultural platforms, or of political debates, that are digitally mediated (for example, online). This exercise must have an artistic element and use Python, in whole or in part, in its execution. You can consult the relevant material from the Python class, such as the presentation slides, code examples, and tutorial references, at the following address:
https://git.erg.school/P039/art_num_2024/src/branch/main/python
@@ -1 +1,37 @@
git repo for art num
## Python course

### install pip3 or pip
Windows/Linux: [Installation](https://pip.pypa.io/en/stable/installation/)
MacOS: [Install Homebrew](https://brew.sh/).
After installation you may need to add Homebrew to your PATH. Execute the code, line by line, from the messages at the end of the Homebrew installation in your terminal. You may also read how to do it [here](https://usercomp.com/news/1130946/add-homebrew-to-path).
Verify brew is installed with `brew --version`. If there are no errors, install Python, which ships with pip, via `brew install python`.

Depending on your system, you may have pip or pip3. Execute `pip --version` or `pip3 --version` and see which one works for you.

### virtualenv
Windows/Linux: [Installation](https://virtualenv.pypa.io/en/latest/installation.html)
MacOS: Execute `brew install virtualenv`
How to use it for all systems -> [User Guide](https://virtualenv.pypa.io/en/latest/user_guide.html)
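As a quick reference, the basic workflow looks like this. A minimal sketch: it uses the stdlib `python3 -m venv`, which behaves like the `virtualenv` command installed above (with virtualenv you would write `virtualenv venv` instead).

```shell
# Create a virtual environment in the folder "venv"
# (equivalently: virtualenv venv)
python3 -m venv venv

# Activate it (on Windows: venv\Scripts\activate)
source venv/bin/activate

# While active, pip installs packages into venv/ instead of system-wide,
# e.g.: pip install requests

# Leave the environment
deactivate
```

The `venv/` and `.venv/` entries in this repo's `.gitignore` keep these folders out of version control.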

### videogrep
tutorial: https://lav.io/notes/videogrep-tutorial/
code: https://github.com/antiboredom/videogrep/tree/master
videos: https://antiboredom.github.io/videogrep/

### python tutorials here
[python basics](./introduction/README.md)
[python scraping](./scrape/README.md)

### python tutorials online
RealPython => need to create a free account
[Work with python from the terminal or with a code editor](https://realpython.com/interacting-with-python/)
[Variables](https://realpython.com/python-variables/)
[User input from terminal/keyboard](https://realpython.com/python-keyboard-input/)
[if / elif / else](https://realpython.com/courses/python-conditional-statements/)
[Adding strings together](https://realpython.com/python-string-concatenation/)
[Data types](https://realpython.com/python-data-types/)

### artistic references
[presentation - slides](./presentation-en.pdf)
BIN  python/audio/opensource.mp4  Normal file  (binary file not shown)
BIN  python/audio/opensourcecut.mp4  Normal file  (binary file not shown)
14  python/introduction/README.md  Normal file

@@ -0,0 +1,14 @@
## introduction

```
For variables --> see all scripts
Working with lists --> see bisous.py programming.py
For user input from terminal or keyboard --> see bisous.py
print
for loop --> see bisous.py
conditional statements if/else/elif --> see bisous.py
break --> bisous.py
while loop --> see missing.py
function --> missing.py
random --> programming.py
```
19  python/introduction/bisous.py  Normal file

@@ -0,0 +1,19 @@
# Initialise the variables
queer = "mon amour"
bisous = ["ma biche", "mon bébé", "mon amour", "mon chéri.e"]

# Ask the user for input
amoureuxse = input("Entrez le nom de votre bien-aiméx : ")

# Loop through the list and print the matching message
for bisou in bisous:
    if bisou == queer:
        print("bisou pour toi", bisou, amoureuxse)

    elif amoureuxse == "python":
        print("on dirait un.e geek")
        break

    else:
        print(f":* :* {bisou}, {amoureuxse}")
12  python/introduction/missing.py  Normal file

@@ -0,0 +1,12 @@
from time import sleep

love = True
how = "so"

def missing(so):
    print(f"I miss you {so} much")

while love:
    missing(how)
    how += " so"
    sleep(0.2)
25  python/introduction/programming.py  Normal file

@@ -0,0 +1,25 @@
"""
poem converted from bash programming.sh by Winnie Soon, modified from The House of Dust, 1967 Alison Knowles and James Tenney
"""
import random
import time

# lists for the different elements
kisses = ["DEAREST", "SWEETHEART", "WORLD", "DARLING", "BABY", "LOVE", "MONKEY", "SUGAR", "LITTLE PRINCE"]
material = ["SAND", "DUST", "LEAVES", "PAPER", "TIN", "ROOTS", "BRICK", "STONE", "DISCARDED CLOTHING", "GLASS", "STEEL", "PLASTIC", "MUD", "BROKEN DISHES", "WOOD", "STRAW", "WEEDS", "FOREST"]
location = ["IN A GREEN, MOSSY TERRAIN", "IN AN OVERPOPULATED AREA", "BY THE SEA", "BY AN ABANDONED LAKE", "IN A DESERTED FACTORY", "IN DENSE WOODS", "IN JAPAN", "AMONG SMALL HILLS", "IN SOUTHERN FRANCE", "AMONG HIGH MOUNTAINS", "ON AN ISLAND", "IN A COLD, WINDY CLIMATE", "IN A PLACE WITH BOTH HEAVY RAIN AND BRIGHT SUN", "IN A DESERTED AIRPORT", "IN A HOT CLIMATE", "INSIDE A MOUNTAIN", "ON THE SEA", "IN MICHIGAN", "IN HEAVY JUNGLE UNDERGROWTH", "BY A RIVER", "AMONG OTHER HOUSES", "IN A DESERTED CHURCH", "IN A METROPOLIS", "UNDERWATER", "ON THE SCREEN", "ON THE ROAD"]
light_source = ["CANDLES", "ALL AVAILABLE LIGHTING", "ELECTRICITY", "NATURAL LIGHT", "LEDS", "MOON LIGHT", "THE SMALL TORCH"]
inhabitants = ["PEOPLE WHO SLEEP VERY LITTLE", "VEGETARIANS", "HORSES AND BIRDS", "PEOPLE SPEAKING MANY LANGUAGES WEARING LITTLE OR NO CLOTHING", "CHILDREN AND OLD PEOPLE", "VARIOUS BIRDS AND FISH", "LOVERS", "PEOPLE WHO ENJOY EATING TOGETHER", "PEOPLE WHO EAT A GREAT DEAL", "COLLECTORS OF ALL TYPES", "FRIENDS AND ENEMIES", "PEOPLE WHO SLEEP ALMOST ALL THE TIME", "VERY TALL PEOPLE", "AMERICAN INDIANS", "LITTLE BOYS", "PEOPLE FROM MANY WALKS OF LIFE", "FRIENDS", "FRENCH AND GERMAN SPEAKING PEOPLE", "FISHERMEN AND FAMILIES", "PEOPLE WHO LOVE TO READ", "CHEERFUL KIDS", "QUEER LOVERS", "NAUGHTY MONKEYS", "KIDDOS"]

# Infinite loop
while True:
    print("HELLO", random.choice(kisses))
    print("  A TERMINAL OF BLACK", random.choice(material))
    print("    ", random.choice(location))
    print("      PROGRAMMING", random.choice(light_source))
    print("        KISSED BY", random.choice(inhabitants))
    print(" ")

    # Delay for 3.5 seconds
    time.sleep(3.5)
0  python/machine_learning/__init__.py  Normal file

1  python/machine_learning/rnn-test.py  Normal file

@@ -0,0 +1 @@
# copy code here

1  python/machine_learning/rnn.py  Normal file

@@ -0,0 +1 @@
# copy your code here
BIN  python/presentation-en.pdf  Normal file  (binary file not shown)
24  python/scrape/README.md  Normal file

@@ -0,0 +1,24 @@
## A script that extracts images from a given URL

We need to install:
```
pip install requests beautifulsoup4 tldextract
```

Run the script with:
```
python get_images.py https://www.freepik.com/images
```
Replace the URL with the link you want to scrape.
**Note:** Scraping must be done ethically, respecting the rules of the robots.txt file and the site's terms of use.
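The robots.txt rules mentioned above can also be checked programmatically with the standard library's `urllib.robotparser`. A minimal sketch; the rules string and the example.org URLs are hypothetical (for a real site you would call `rp.set_url(...)` followed by `rp.read()`):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, parsed from a string for illustration
rules = """User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Check whether a generic crawler may fetch each URL
print(rp.can_fetch("*", "https://example.org/images/"))    # True
print(rp.can_fetch("*", "https://example.org/private/x"))  # False
```

Calling this before each `requests.get` is one way to keep a scraper within a site's stated rules.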

### Beautiful soup
[INSTALL](https://beautiful-soup-4.readthedocs.io/en/latest/#installing-beautiful-soup)
[HOWTO](https://beautiful-soup-4.readthedocs.io/en/latest/#making-the-soup)
[WORK with HTML TAG](https://beautiful-soup-4.readthedocs.io/en/latest/#navigating-the-tree)
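The soup-making and tag-navigation steps from those links can be tried on an inline HTML string, with no network needed. A small sketch (the markup is hypothetical), using the same `find_all('img')` pattern as `get_images.py`:

```python
from bs4 import BeautifulSoup

# Hypothetical snippet of a page: two images with a src, one without
html = '<div><img src="/a.png"><img src="https://cdn.example.org/b.jpg"><img alt="no src"></div>'
soup = BeautifulSoup(html, "html.parser")

# Collect src attributes, skipping tags that have none
srcs = [img.get("src") for img in soup.find_all("img") if img.get("src")]
print(srcs)  # ['/a.png', 'https://cdn.example.org/b.jpg']
```

The guard on `img.get("src")` matters on real pages, where lazy-loaded images often carry their URL in another attribute or none at all.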

### Collage with images
[Make video with all same size images](https://pythonexamples.org/python-opencv-cv2-create-video-from-images/)
[Change images to same size then make a video](https://www.geeksforgeeks.org/python-create-video-using-multiple-images-using-opencv/)
78  python/scrape/get_images.py  Normal file

@@ -0,0 +1,78 @@
import random
import requests
import time
from bs4 import BeautifulSoup
import os
import sys
import tldextract

# URL of the webpage with images
input_url = sys.argv[1]

# extract full domain
def split_domain_or_subdomain_and_path(url):
    extracted = tldextract.extract(url)

    # Build the full domain, including subdomain if present
    if extracted.subdomain:
        full_domain = f"{extracted.subdomain}.{extracted.domain}.{extracted.suffix}"
    else:
        full_domain = f"{extracted.domain}.{extracted.suffix}"

    return "https://" + full_domain

full_domain = split_domain_or_subdomain_and_path(input_url)
print(f"Domain/Subdomain: {full_domain}")

# Folder to save images
save_folder = "downloaded_images"
if not os.path.exists(save_folder):
    os.makedirs(save_folder)

# Send GET request to the page with a randomly chosen User-Agent
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15',
    'Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1',
]
headers = {
    "User-Agent": random.choice(user_agents)
}

response = requests.get(input_url, headers=headers)
if response.status_code == 200:
    # Parse the HTML content with BeautifulSoup
    soup = BeautifulSoup(response.text, 'html.parser')

    # Find all image tags
    images = soup.find_all('img')

    # Loop through image tags
    for idx, img in enumerate(images):
        img_url = img.get('src')

        # Skip tags that have no src attribute
        if not img_url:
            continue

        # Check if img_url is complete; if not, prefix it with the domain
        if not img_url.startswith("http"):
            img_url = full_domain + "/" + img_url
        # Drop everything after the first "&" (query parameters)
        img_url = img_url.split("&")[0]
        try:
            # Send request to the image URL
            img_data = requests.get(img_url, headers=headers).content
            # Define file name and path
            img_name = os.path.join(save_folder, f"image_{idx}.jpg")
            # Write image data to file
            with open(img_name, 'wb') as img_bytes:
                img_bytes.write(img_data)

            print(f"Downloaded {img_name}")
            time.sleep(1)

        except Exception as e:
            print(f"Failed to download {img_url}. Error: {e}")
else:
    print("Failed to retrieve the page.")
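The script above rebuilds absolute image URLs by hand, prefixing the extracted domain. The standard library's `urllib.parse.urljoin` does this resolution in one call and also handles relative paths like `thumbs/b.jpg` correctly against the page's own path. A sketch with a hypothetical example.org page URL:

```python
from urllib.parse import urljoin

base = "https://example.org/gallery/page.html"

# Root-relative src: resolved against the domain
print(urljoin(base, "/img/a.png"))       # https://example.org/img/a.png
# Relative src: resolved against the page's directory
print(urljoin(base, "thumbs/b.jpg"))     # https://example.org/gallery/thumbs/b.jpg
# Already absolute src: returned unchanged
print(urljoin(base, "https://cdn.example.org/c.png"))  # https://cdn.example.org/c.png
```

Replacing the `startswith("http")` branch with `img_url = urljoin(input_url, img_url)` would make both scrapers robust to all three cases.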
82  python/scrape/get_images_montreuil.py  Normal file

@@ -0,0 +1,82 @@
import random
import requests
import time
from bs4 import BeautifulSoup
import os
import sys
import tldextract

# URL of the webpage with images
input_url = sys.argv[1]

# extract full domain
def split_domain_or_subdomain_and_path(url):
    extracted = tldextract.extract(url)

    # Build the full domain, including subdomain if present
    if extracted.subdomain:
        full_domain = f"{extracted.subdomain}.{extracted.domain}.{extracted.suffix}"
    else:
        full_domain = f"{extracted.domain}.{extracted.suffix}"

    return "https://" + full_domain

full_domain = split_domain_or_subdomain_and_path(input_url)
print(f"Domain/Subdomain: {full_domain}")

# Folder to save images
save_folder = "downloaded_images"
if not os.path.exists(save_folder):
    os.makedirs(save_folder)

# Send GET request to the page with a randomly chosen User-Agent
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15',
    'Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1',
]
headers = {
    "User-Agent": random.choice(user_agents)
}

response = requests.get(input_url, headers=headers)
if response.status_code == 200:
    # Parse the HTML content with BeautifulSoup
    soup = BeautifulSoup(response.text, 'html.parser')

    # Find all image tags
    images = soup.find_all('img')

    # Loop through image tags
    for idx, img in enumerate(images):
        img_url = img.get('src')

        # Skip tags that have no src attribute
        if not img_url:
            continue

        # Check if img_url is complete; if not, prefix it with the domain
        if not img_url.startswith("http"):
            img_url = full_domain + "/" + img_url
        # Drop everything after the first "&" (query parameters)
        img_url = img_url.split("&")[0]
        print(img_url)
        try:
            # Send request to the image URL
            img_data = requests.get(img_url, headers=headers).content
            # Define file name and path
            img_name = os.path.join(save_folder, f"image_{idx}.jpg")
            # Write image data to file once
            # (a second write of a re-downloaded copy would corrupt the file)
            with open(img_name, 'wb') as img_bytes:
                img_bytes.write(img_data)

            print(f"Downloaded {img_name}")
            time.sleep(1)

        except Exception as e:
            print(f"Failed to download {img_url}. Error: {e}")
else:
    print("Failed to retrieve the page.")
BIN  python/videos/queercut.mp4  Normal file  (binary file not shown)