- Frontend domain: https://200okforever.netlify.app/
- Go to the GitHub main page of the student-internship repository.
- Click the green `Code` button on the right, then copy the HTTPS URL.
- Open your command line, type `git clone`, paste the URL, then press Enter.
- On your command line, `cd student-internship`, then `cd client`.
- If you haven't installed npm, follow this guide to install npm.
- On your command line, run `npm install --force` (this may take several minutes, please be patient).
- On your command line, run `npm start` to start the frontend in your browser (this may take another few minutes).
- While you are waiting, open another command line and `cd` into `/student-internship/server`.
- If you haven't installed pip, install pip.
- On your command line, run `pip install -r requirements.txt` (this may take another few minutes).
- Try running `flask run`. You may see `ImportError: No module named xxx` (where xxx is a particular module name); run `pip install xxx` to install that module. (This step may need to be repeated several times.)
- If no `ImportError: No module named xxx` appears and you instead see `Running on http://127.0.0.1:5004/ (Press CTRL+C to quit)`, the backend has been set up successfully.
- At the same time, check whether you can see `Local: http://localhost:3002` and `On Your Network: http://192.168.1.24:3002` in your first command line; this means your frontend has been set up, and your browser should automatically open the web app. If it does not open, you can manually type `http://localhost:3002` into your browser to open the web app. A quick way to check both servers from code is sketched below.
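If you want to confirm that both servers are reachable without a browser, here is a minimal sketch; it assumes the default ports mentioned above (3002 for the frontend, 5004 for the backend) and that the `requests` package is installed.

```python
# Minimal sanity check for the local setup (assumes the default ports above).
import requests

for name, url in [("frontend", "http://localhost:3002"),
                  ("backend", "http://127.0.0.1:5004/")]:
    try:
        resp = requests.get(url, timeout=5)
        # Any HTTP response (even a 404) means the server process is up.
        print(f"{name} responded with status {resp.status_code}")
    except requests.ConnectionError:
        print(f"{name} is not reachable at {url}")
```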
- To search for jobs
- https://rapidapi.com/letscrape-6bRBa3QguO5/api/google-jobs-search/
- 5000 requests/month and 100 requests/s
```python
import requests

# Google Jobs Search endpoint on RapidAPI
url = "https://google-jobs-search.p.rapidapi.com/search"

querystring = {"query": "Full time web developer jobs in new york"}

headers = {
    "X-RapidAPI-Key": "a5da9e2614msh4ff783e33e0d183p1ac95fjsned37eddfa900",
    "X-RapidAPI-Host": "google-jobs-search.p.rapidapi.com"
}

response = requests.request("GET", url, headers=headers, params=querystring)

print(response.text)
```
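The structure of the JSON this endpoint returns is not documented here, so the following is only a sketch of how the response above could be inspected; the `jobs`, `title`, and `employer` keys are assumed field names, not confirmed ones.

```python
# Hedged sketch: inspect the response from the request above.
# The "jobs", "title" and "employer" keys are assumptions about the
# payload shape -- print the raw JSON once and adjust accordingly.
import json

data = response.json()
print(json.dumps(data, indent=2)[:1000])  # peek at the start of the payload

for job in data.get("jobs", []):
    print(job.get("title"), "-", job.get("employer"))
```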
- To get company info
- https://rapidapi.com/iscraper/api/linkedin-profiles-and-company-data/
- 100 requests/month and 10 requests/min
```python
import requests

# LinkedIn profile-details endpoint on RapidAPI
url = "https://linkedin-profiles-and-company-data.p.rapidapi.com/profile-details"

payload = {
    "profile_id": "williamhgates",
    "profile_type": "personal",
    "contact_info": False,
    "recommendations": False,
    "related_profiles": False
}

headers = {
    "content-type": "application/json",
    "X-RapidAPI-Key": "a5da9e2614msh4ff783e33e0d183p1ac95fjsned37eddfa900",
    "X-RapidAPI-Host": "linkedin-profiles-and-company-data.p.rapidapi.com"
}

response = requests.request("POST", url, json=payload, headers=headers)

print(response.text)
```
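The request above fetches a personal profile. For the company info mentioned earlier, the same endpoint presumably accepts a company identifier with a different `profile_type`; the values `"company"` and `"microsoft"` below are assumptions to be verified on the RapidAPI playground, not confirmed parameters, and the sketch simply pretty-prints whatever comes back rather than guessing field names.

```python
# Hedged sketch for a company lookup. "company" as the profile_type and
# "microsoft" as the profile_id are assumed values, not confirmed by the
# API docs -- verify them on the RapidAPI playground first.
import json
import requests

url = "https://linkedin-profiles-and-company-data.p.rapidapi.com/profile-details"

payload = {
    "profile_id": "microsoft",   # assumed company vanity name
    "profile_type": "company",   # assumed value for company lookups
    "contact_info": False,
    "recommendations": False,
    "related_profiles": False
}

headers = {
    "content-type": "application/json",
    "X-RapidAPI-Key": "a5da9e2614msh4ff783e33e0d183p1ac95fjsned37eddfa900",
    "X-RapidAPI-Host": "linkedin-profiles-and-company-data.p.rapidapi.com"
}

response = requests.post(url, json=payload, headers=headers)
print(json.dumps(response.json(), indent=2))
```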
- To recommend learning courses based on jobs
- Use search to get tutorials on YouTube (see the sketch after this list)
- It should run on Google Cloud Platform and has not been tested yet (should be done before the end of phase 1)
- https://developers.google.com/youtube/v3/docs/search/list
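The course-recommendation piece is not implemented yet, but a call to the `search.list` endpoint linked above could look roughly like this; the `YOUTUBE_API_KEY` placeholder and the idea of building the query from a job title are assumptions about how the feature might be wired up, not existing project code.

```python
# Rough sketch of a YouTube tutorial search for a given job title.
# YOUTUBE_API_KEY is a placeholder, and building the query from the job
# title is an assumption about the recommendation feature, not implemented code.
import requests

YOUTUBE_API_KEY = "YOUR_API_KEY"  # obtained from the Google Cloud console

def search_tutorials(job_title, max_results=5):
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/search",
        params={
            "part": "snippet",
            "q": f"{job_title} tutorial",
            "type": "video",
            "maxResults": max_results,
            "key": YOUTUBE_API_KEY,
        },
    )
    resp.raise_for_status()
    # Each result item carries the video id and a snippet with its title.
    return [
        (item["id"]["videoId"], item["snippet"]["title"])
        for item in resp.json().get("items", [])
    ]

print(search_tutorials("Full time web developer"))
```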
In this project, we used Teams and Discord to communicate, and regular meetings were held to keep up with everyone's progress. We collaboratively created a text version of the API documentation and a Swagger UI, which make the APIs easier to understand and ease communication between the frontend team and the backend team. We also used a Jira board for product management.
Screenshots: Jira Board, Jira Card Detail Info, Teams Chat, Discord Chat, One of the Meeting Minutes, Swagger UI, API doc
In this project, we made extensive use of the Git version control system, creating a branch for each task.
Screenshot: Sourcetree