
Using HTTP Requests to Import Data into a Database

What is an HTTP request?
It is the mechanism used to pull data through APIs, which may be private or public. Some APIs are built in-house for internal use, while others are exposed publicly, and there is a series of methods that can be used to fetch data through these public APIs.
Let's see some of them:
1) GET: it is used to request data from a server, usually through the website URL. An access token is often required, and the path in the URL itself specifies the resource someone is looking for.
Let's understand with an example:
For a college website xyzuniversity.com, if I need to call the record for a student named "Alex", I would write the request like
GET http://xyzuniversity.com/student-record/alex

In the response I will get the associated record in JSON format.
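
As a rough sketch, the same call could be made from Python with the requests library; the URL and the token header below are hypothetical placeholders, not a real endpoint.

import requests

# Hypothetical endpoint and token -- replace with the real API details.
url = "http://xyzuniversity.com/student-record/alex"
headers = {"Authorization": "Bearer <access-token>"}

response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()   # stop early on 4xx/5xx errors
record = response.json()      # the associated record in JSON format
print(record)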

Here we used the GET method to fetch this data. Similarly, there are other methods such as POST, PUT, HEAD, DELETE, PATCH and OPTIONS that can be used based on the need.
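
Purely for illustration, the requests library exposes these other verbs as functions as well; the endpoints and payloads below are made-up placeholders.

import requests

base = "http://xyzuniversity.com/student-record"
headers = {"Authorization": "Bearer <access-token>"}

# POST: create a new record (hypothetical payload)
requests.post(base, json={"name": "Alex", "year": 2}, headers=headers)

# PUT / PATCH: replace or partially update an existing record
requests.put(f"{base}/alex", json={"name": "Alex", "year": 3}, headers=headers)
requests.patch(f"{base}/alex", json={"year": 3}, headers=headers)

# DELETE: remove the record
requests.delete(f"{base}/alex", headers=headers)

# HEAD / OPTIONS: inspect headers or allowed methods without fetching a body
requests.head(f"{base}/alex", headers=headers)
requests.options(base, headers=headers)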

Let's come back to the original concept where I need to pull data into the database by using API calls.

So now I will use a tool like Python or ADF where my source and sink are managed. In the source I will use the web or API connector to call the data, and in the destination I will use the connection to my database. When I run the pipeline with parameters such as the method, the URL and the API type, a request is sent to the server, the data is fetched, and it is then loaded into the destination database.
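
Below is a minimal Python sketch of that idea, assuming a hypothetical JSON endpoint as the source and a local SQLite database as the sink; in ADF the same pattern maps to an HTTP/REST dataset as the source and a database linked service as the sink of a Copy activity.

import requests
import sqlite3

# Source: hypothetical API endpoint returning a list of JSON records.
url = "http://xyzuniversity.com/student-records"
headers = {"Authorization": "Bearer <access-token>"}
rows = requests.get(url, headers=headers, timeout=30).json()

# Sink: connection to the destination database (SQLite here, purely for illustration).
conn = sqlite3.connect("university.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS student_record (name TEXT, department TEXT, year INTEGER)"
)

# Load the fetched records; the JSON keys are assumed to match the column names.
conn.executemany(
    "INSERT INTO student_record (name, department, year) "
    "VALUES (:name, :department, :year)",
    rows,
)
conn.commit()
conn.close()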
Here we need to be sure that the schemas of the source and the destination match to avoid errors, and that the URL along with the token or credentials has already been verified and tested.
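
One lightweight way to verify the schema part before a full load is to compare the keys of a single sample record against the destination table's columns; the endpoint and table names here are the same hypothetical ones used above.

import requests
import sqlite3

# Pull one sample record from the (hypothetical) endpoint.
sample = requests.get(
    "http://xyzuniversity.com/student-records",
    headers={"Authorization": "Bearer <access-token>"},
    timeout=30,
).json()[0]

# Read the column names of the destination table.
conn = sqlite3.connect("university.db")
columns = {row[1] for row in conn.execute("PRAGMA table_info(student_record)")}
conn.close()

# Any source field missing from the table will break the load, so fail early.
missing = set(sample) - columns
if missing:
    raise ValueError(f"Source fields not in destination table: {missing}")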

In this way an HTTP request can be used to populate a database from a dataset served by an API.
