23 Commits

Author SHA1 Message Date
95421af328 README change 2025-10-01 15:01:52 +02:00
621bdc3a2d julien add a file 2025-10-01 14:34:31 +02:00
Mara Karagianni 23091bd7ff fix code format in README 2025-10-01 12:40:10 +02:00
Mara Karagianni 5d5d48e13e fix typo in README 2025-10-01 12:27:00 +02:00
Mara Karagianni d17efbb2f2 add more git steps in README 2025-10-01 11:58:43 +02:00
f77082d6cf update rnn file 2025-04-22 20:40:00 +02:00
9ae58a0f4f add rnn test file 2025-04-22 20:27:42 +02:00
bf12d1e5d4 edit rnn 2025-04-22 20:26:56 +02:00
278eee7246 add rnn file 2025-04-22 20:22:36 +02:00
Mara Karagianni 2bb42ab8f6 add new dir for machine learning 2025-04-22 20:16:47 +02:00
Mara Karagianni 7d55d258ac add user agents in scraping images script 2025-04-22 20:11:41 +02:00
Mara Karagianni a09846193d Add scraping for archives site 2025-03-13 18:03:32 +01:00
Mara Karagianni 20c041878d readme: add beautifulsoup 2024-12-04 17:09:14 +01:00
Mara Karagianni ae8cf7247d scrape: add howtos for video collage 2024-12-04 13:12:06 +01:00
Mara Karagianni 342a45a4f2 add presentation 2024-11-29 14:45:11 +01:00
Mara Karagianni b09af3ea95 update readme 2024-11-29 14:40:07 +01:00
Mara Karagianni 79fa977d72 add exercise, presentation and tutorials 2024-11-29 14:26:43 +01:00
Mara Karagianni 6304fb4def videos: add sample for videogrep exercise 2024-11-28 14:15:31 +01:00
Mara Karagianni 138e6b30d7 readme: add videogrep links 2024-11-27 15:49:39 +01:00
Mara Karagianni 4e5642a83b art python intro 2024-11-06 21:55:05 +01:00
Mara Karagianni e13b25bbcd translate scraping README in french 2024-10-31 19:48:56 +01:00
Mara Karagianni ad3a364347 add python image scrape script 2024-10-31 19:25:04 +01:00
Mara Karagianni 7ef8f2ffd5 add gitignore file 2024-10-31 19:06:32 +01:00
20 changed files with 350 additions and 34 deletions

.gitignore vendored Normal file (+10)

@@ -0,0 +1,10 @@
# Environments
venv/
.venv/
/pyvenv.cfg
.python-version
# Media
media/
downloaded_images
downloaded_videos

README.md

@@ -1,13 +1,56 @@
Sabri Stevelinck
# git repo for art num
*the wiki will be updated with more information and useful snippets. feel free to contribute*
test port ssh
## how to
1. clone this repo to your own computer
```
git clone https://git.erg.school/P039/art_num_2024.git
```
2. check for updates before each course
```
git pull
```
3. create your own branch
```
git checkout -b prenom-nom
```
4. create and update README.md with your name
```
nano README.md
```
5. save the file with `ctrl+X`
6. check git tracking
```
git status
```
7. add and commit the change in git
```
git add README.md
git commit -m 'change README'
```
8. check the commit was registered
```
git log
git show <commit-hash>
```
9. push your branch to gitea
```
git push origin prenom-nom
```
10. update your branch with the latest main branch
```
git checkout main
git pull origin main
git checkout prenom-nom
git rebase main
git log
```
## content

ThisIsANewFile.txt Normal file (+0)

python/EXERCISE.md Normal file (+2)

@@ -0,0 +1,2 @@
The student engages in a creative exploration by commenting on, reflecting on, or détourning cultural platforms or political debates that are digitally mediated (for example, online). The exercise must have an artistic element and use Python, in whole or in part, in its execution. You can consult the relevant material from the Python class, such as the presentation slides, code examples, and tutorial references, at the following address:
https://git.erg.school/P039/art_num_2024/src/branch/main/python

python/README.md

@@ -1 +1,37 @@
## Python course
### install pip3 or pip
Windows/Linux: [Installation](https://pip.pypa.io/en/stable/installation/)
MacOS: [Install Homebrew](https://brew.sh/).
After installation you may need to add Homebrew to your PATH. Execute the code, line by line, from the messages at the end of the Homebrew installation in your terminal. You may also read how to do it [here](https://usercomp.com/news/1130946/add-homebrew-to-path).
Verify brew is installed with `brew --version`. If there are no errors, install Python (which ships with pip) with `brew install python`.
Depending on your system, you may have pip or pip3. Execute `pip --version` or `pip3 --version` and see which one works for you.
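If neither command is found, you can also invoke pip through the Python interpreter itself:
```
python3 -m pip --version   # Linux/MacOS
python -m pip --version    # Windows
```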
### virtualenv
Windows/Linux: [Installation](https://virtualenv.pypa.io/en/latest/installation.html)
MacOS: Execute `brew install virtualenv`
How to use it for all systems -> [User Guide](https://virtualenv.pypa.io/en/latest/user_guide.html)
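As a minimal sketch of the typical workflow (the environment name `venv` is just a convention):
```
virtualenv venv
source venv/bin/activate   # Linux/MacOS; on Windows: venv\Scripts\activate
pip install requests       # packages now install inside venv/
deactivate                 # leave the environment
```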
### videogrep
tutorial: https://lav.io/notes/videogrep-tutorial/
code: https://github.com/antiboredom/videogrep/tree/master
videos: https://antiboredom.github.io/videogrep/
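A minimal usage sketch (videogrep installs with pip; the input video needs a subtitle file or transcript alongside it for the search to match against):
```
pip install videogrep
videogrep --input path/to/vid.mp4 --search 'love' --output supercut.mp4
```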
### python tutorials here
[python basics](./introduction/README.md)
[python scraping](./scrape/README.md)
### python tutorials online
RealPython => requires a free account
[Work with python from the terminal or with a code editor](https://realpython.com/interacting-with-python/)
[Variables](https://realpython.com/python-variables/)
[User input from terminal/keyboard](https://realpython.com/python-keyboard-input/)
[if/ elif/ else](https://realpython.com/courses/python-conditional-statements/)
[Adding strings together](https://realpython.com/python-string-concatenation/)
[Data types](https://realpython.com/python-data-types/)
### artistic references
[presentation - slides](./presentation-en.pdf)

python/audio/opensource.mp4 Normal file (BIN)
Binary file not shown.

Binary file not shown.

python/introduction/README.md

@@ -0,0 +1,14 @@
## introduction
```
For variables --> see all scripts
Working with lists --> see bisous.py programming.py
For user input from terminal or keyboard --> see bisous.py
print --> see all scripts
for loop --> see bisous.py
conditional statements if/else/elif --> see bisous.py
break --> bisous.py
while loop --> see missing.py
function --> missing.py
random --> programming.py
```

python/introduction/bisous.py

@@ -0,0 +1,19 @@
# Initialize the variables
queer = "mon amour"
bisous = ["ma biche", "mon bébé", "mon amour", "mon chéri.e"]

# Ask the user for input
amoureuxse = input("Entrez le nom de votre bien-aiméx : ")

# Loop through the list and print the matching message
for bisou in bisous:
    if bisou == queer:
        print("bisou pour toi", bisou, amoureuxse)
    elif amoureuxse == "python":
        print("on dirait un.e geek")
        break
    else:
        print(f":* :* {bisou}, {amoureuxse}")

python/introduction/missing.py

@@ -0,0 +1,12 @@
from time import sleep

love = True
how = "so"

# Print a declaration of longing that grows by one "so" per call
def missing(so):
    print(f"I miss you {so} much")

# Loop forever, intensifying the message each time
while love:
    missing(how)
    how += " so"
    sleep(0.2)

python/introduction/programming.py

@@ -0,0 +1,25 @@
"""
poem converted from bash programming.sh by Winnie Soon, modified from The House of Dust, 1967 Alison Knowles and James Tenney
"""
import random
import time
# listes for different elements
kisses = ["DEAREST", "SWEETHEART", "WORLD", "DARLING", "BABY", "LOVE", "MONKEY", "SUGAR", "LITTLE PRINCE"]
material = ["SAND", "DUST", "LEAVES", "PAPER", "TIN", "ROOTS", "BRICK", "STONE", "DISCARDED CLOTHING", "GLASS", "STEEL", "PLASTIC", "MUD", "BROKEN DISHES", "WOOD", "STRAW", "WEEDS", "FOREST"]
location = ["IN A GREEN, MOSSY TERRAIN", "IN AN OVERPOPULATED AREA", "BY THE SEA", "BY AN ABANDONED LAKE", "IN A DESERTED FACTORY", "IN DENSE WOODS", "IN JAPAN", "AMONG SMALL HILLS", "IN SOUTHERN FRANCE", "AMONG HIGH MOUNTAINS", "ON AN ISLAND", "IN A COLD, WINDY CLIMATE", "IN A PLACE WITH BOTH HEAVY RAIN AND BRIGHT SUN", "IN A DESERTED AIRPORT", "IN A HOT CLIMATE", "INSIDE A MOUNTAIN", "ON THE SEA", "IN MICHIGAN", "IN HEAVY JUNGLE UNDERGROWTH", "BY A RIVER", "AMONG OTHER HOUSES", "IN A DESERTED CHURCH", "IN A METROPOLIS", "UNDERWATER", "ON THE SCREEN", "ON THE ROAD"]
light_source = ["CANDLES", "ALL AVAILABLE LIGHTING", "ELECTRICITY", "NATURAL LIGHT", "LEDS", "MOON LIGHT", "THE SMALL TORCH"]
inhabitants = ["PEOPLE WHO SLEEP VERY LITTLE", "VEGETARIANS", "HORSES AND BIRDS", "PEOPLE SPEAKING MANY LANGUAGES WEARING LITTLE OR NO CLOTHING", "CHILDREN AND OLD PEOPLE", "VARIOUS BIRDS AND FISH", "LOVERS", "PEOPLE WHO ENJOY EATING TOGETHER", "PEOPLE WHO EAT A GREAT DEAL", "COLLECTORS OF ALL TYPES", "FRIENDS AND ENEMIES", "PEOPLE WHO SLEEP ALMOST ALL THE TIME", "VERY TALL PEOPLE", "AMERICAN INDIANS", "LITTLE BOYS", "PEOPLE FROM MANY WALKS OF LIFE", "FRIENDS", "FRENCH AND GERMAN SPEAKING PEOPLE", "FISHERMEN AND FAMILIES", "PEOPLE WHO LOVE TO READ", "CHEERFUL KIDS", "QUEER LOVERS", "NAUGHTY MONKEYS", "KIDDOS"]
# Infinite loop
while True:
print("HELLO", random.choice(kisses))
print(" A TERMINAL OF BLACK", random.choice(material))
print(" ", random.choice(location))
print(" PROGRAMMING", random.choice(light_source))
print(" KISSED BY", random.choice(inhabitants))
print(" ")
# Delay for 3.5 seconds
time.sleep(3.5)



@@ -0,0 +1 @@
#copy code here


@@ -0,0 +1 @@
# copy here your code

python/presentation-en.pdf Normal file (BIN)

Binary file not shown.

python/scrape/README.md Normal file (+24)

@@ -0,0 +1,24 @@
## A script that extracts images from a given URL
We need to install:
```
pip install requests beautifulsoup4 tldextract
```
Run the script with:
```
python get_images.py https://www.freepik.com/images
```
Replace the URL with the link you want to scrape.
**Note:** Scraping must be done ethically, respecting the rules in the site's robots.txt file and the site's terms of use.
### Beautiful Soup
[INSTALL](https://beautiful-soup-4.readthedocs.io/en/latest/#installing-beautiful-soup)
[HOWTO](https://beautiful-soup-4.readthedocs.io/en/latest/#making-the-soup)
[WORK with HTML TAGS](https://beautiful-soup-4.readthedocs.io/en/latest/#navigating-the-tree)
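For example, a minimal Beautiful Soup sketch (https://example.com is a stand-in for the page you want to parse):
```
import requests
from bs4 import BeautifulSoup

# Fetch a page and parse its HTML
response = requests.get("https://example.com")
soup = BeautifulSoup(response.text, "html.parser")

print(soup.title.string)          # text of the <title> tag
for img in soup.find_all("img"):  # every <img> tag on the page
    print(img.get("src"))
```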
### Collage with images
[Make a video from images that are all the same size](https://pythonexamples.org/python-opencv-cv2-create-video-from-images/)
[Resize images to the same size, then make a video](https://www.geeksforgeeks.org/python-create-video-using-multiple-images-using-opencv/)
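A minimal sketch of the first approach, assuming `opencv-python` is installed and `downloaded_images/` holds the .jpg files fetched by get_images.py:
```
import cv2
import os

folder = "downloaded_images"
images = sorted(f for f in os.listdir(folder) if f.endswith(".jpg"))

# The first image fixes the frame size of the video
first = cv2.imread(os.path.join(folder, images[0]))
height, width, _ = first.shape

# 2 frames per second, mp4v codec
out = cv2.VideoWriter("collage.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 2, (width, height))
for name in images:
    frame = cv2.imread(os.path.join(folder, name))
    frame = cv2.resize(frame, (width, height))  # guard against size mismatches
    out.write(frame)
out.release()
```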

python/scrape/get_images.py

@@ -0,0 +1,78 @@
import random
import requests
import time
from bs4 import BeautifulSoup
import os
import sys
import tldextract

# URL of the webpage with images
input_url = sys.argv[1]

# extract full domain
def split_domain_or_subdomain_and_path(url):
    extracted = tldextract.extract(url)
    # Build the full domain, including subdomain if present
    if extracted.subdomain:
        full_domain = f"{extracted.subdomain}.{extracted.domain}.{extracted.suffix}"
    else:
        full_domain = f"{extracted.domain}.{extracted.suffix}"
    return "https://" + full_domain

full_domain = split_domain_or_subdomain_and_path(input_url)
print(f"Domain/Subdomain: {full_domain}")

# Folder to save images
save_folder = "downloaded_images"
if not os.path.exists(save_folder):
    os.makedirs(save_folder)

# Send GET request to the page with a randomly chosen browser User-Agent
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15',
    'Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1',
]
headers = {
    "User-Agent": random.choice(user_agents)
}
response = requests.get(input_url, headers=headers)

if response.status_code == 200:
    # Parse the HTML content with BeautifulSoup
    soup = BeautifulSoup(response.text, 'html.parser')
    # Find all image tags
    images = soup.find_all('img')
    # Loop through image tags
    for idx, img in enumerate(images):
        img_url = img.get('src')
        if not img_url:
            continue  # skip <img> tags without a src attribute
        # Check if img_url is complete; if not, prepend the page's domain
        if not img_url.startswith("http"):
            img_url = full_domain + "/" + img_url
        # Drop query parameters after the first "&"
        img_url = img_url.split("&")[0]
        try:
            # Send request to the image URL
            img_data = requests.get(img_url, headers=headers).content
            # Define file name and path
            img_name = os.path.join(save_folder, f"image_{idx}.jpg")
            # Write image data to file
            with open(img_name, 'wb') as img_bytes:
                img_bytes.write(img_data)
            print(f"Downloaded {img_name}")
            time.sleep(1)
        except Exception as e:
            print(f"Failed to download {img_url}. Error: {e}")
else:
    print("Failed to retrieve the page.")


@@ -0,0 +1,82 @@
import random
import requests
import time
from bs4 import BeautifulSoup
import os
import sys
import tldextract

# URL of the webpage with images
input_url = sys.argv[1]

# extract full domain
def split_domain_or_subdomain_and_path(url):
    extracted = tldextract.extract(url)
    # Build the full domain, including subdomain if present
    if extracted.subdomain:
        full_domain = f"{extracted.subdomain}.{extracted.domain}.{extracted.suffix}"
    else:
        full_domain = f"{extracted.domain}.{extracted.suffix}"
    return "https://" + full_domain

full_domain = split_domain_or_subdomain_and_path(input_url)
print(f"Domain/Subdomain: {full_domain}")

# Folder to save images
save_folder = "downloaded_images"
if not os.path.exists(save_folder):
    os.makedirs(save_folder)

# Send GET request to the page, rotating browser User-Agents
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15',
    'Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1',
]
headers = {
    "User-Agent": random.choice(user_agents)
}
response = requests.get(input_url, headers=headers)

if response.status_code == 200:
    # Parse the HTML content with BeautifulSoup
    soup = BeautifulSoup(response.text, 'html.parser')
    # Find all image tags
    images = soup.find_all('img')
    # Loop through image tags
    for idx, img in enumerate(images):
        img_url = img.get('src')
        if not img_url:
            continue  # skip <img> tags without a src attribute
        # Check if img_url is complete; if not, prepend the page's domain
        if not img_url.startswith("http"):
            img_url = full_domain + "/" + img_url
        # Drop query parameters after the first "&"
        img_url = img_url.split("&")[0]
        print(img_url)
        try:
            # Send request to the image URL
            img_data = requests.get(img_url, headers=headers).content
            # Define file name and path
            img_name = os.path.join(save_folder, f"image_{idx}.jpg")
            # Write image data to file
            with open(img_name, 'wb') as img_bytes:
                img_bytes.write(img_data)
            print(f"Downloaded {img_name}")
            time.sleep(1)
        except Exception as e:
            print(f"Failed to download {img_url}. Error: {e}")
else:
    print("Failed to retrieve the page.")

python/videos/queercut.mp4 Normal file (BIN)

Binary file not shown.


@@ -1,31 +0,0 @@
# artistic ref: uses of python
## [Computational Poems : Les deux, Nick Montfort](https://nickm.com/2/les_deux.html)
- US digital artist / researcher
- dynamic online poem generator (javascript)
- multilingual poem (fr, esp, cn) => a translation device (js)
## [The Great Netfix, *Ritasdatter & Gansing*](http://netflix.lnd4.net/)
*a video store after the end of the world*
- notion of de-clouding: a speculative proposal for redistributing the contemporary "cloud base"
- **scraping** of Netflix films via VPN (a utility that immaterially relocates the client's location) & **VHS** recording
- Raspberry Pi (WLAN) setup with a VHS tape recorder
## [Videogrep, *Sam Lavigne* (2014)](https://antiboredom.github.io/videogrep/)
- python script that searches through the dialog of videos and combines the matches into a new video
- e.g.: condenses every occurrence of an expression from an original video
- makes visible how the use of marketing strategies (talking points) has been normalized in partisan political contexts
- command-line tool / python module freely available in the project's github archive
```videogrep --input path/to/vid.mp4 --search 'search phrase'```
## [Unerasable Characters, *Winnie Soon*](https://calls.ars.electronica.art/2023/prix/winners/7149/)
Prix Ars Electronica, 2023
- scraping of censored/deleted data from Weibo (Chinese social media, the equivalent of Twitter)
- dispersal of the ideograms across a physical light matrix
- concatenation of all the characters via machine learning (TensorFlow) for republication to the source (Weibo) and the production of a physical edition