
CORS - Cross-origin resource sharing





Today, many applications depend on APIs to access different resources. Popular examples include weather, time, and font APIs. 

There are also servers that host these APIs and deliver their data to websites and other endpoints. Making cross-origin calls is therefore a common pattern in modern web applications. 

Consider a website that loads images, videos, iframes, or scripts from another server. That website is accessing resources from a different origin, or domain. When you build an application with Express to serve such resources, requests from external origins may fail. This is where CORS comes in: it lets you handle cross-origin requests. 



What is CORS? 

CORS stands for Cross-Origin Resource Sharing. It allows us to relax the security policy the browser applies to an API. This is done by setting the Access-Control-Allow-Origin header, which specifies which origins may access the API. 

In other words, CORS is a browser security mechanism that restricts cross-origin HTTP requests to other servers and lets a server specify which domains may access its resources. 

Ex: When the client sets the flag that allows cross-site access control to true (withCredentials on XMLHttpRequest, or credentials: 'include' with fetch), the request is made with credentials such as cookies and authorization headers.

Client side: 
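A minimal client-side sketch using fetch; the endpoint URL below is a placeholder, not a real API:

```javascript
// Hypothetical endpoint; replace with your API's real URL.
const API_URL = 'https://api.example.com/profile';

// credentials: 'include' tells the browser to attach cookies and
// authorization headers to the cross-origin request (the older
// XMLHttpRequest equivalent is xhr.withCredentials = true).
function getProfile() {
  return fetch(API_URL, { method: 'GET', credentials: 'include' });
}
```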




Server Side: 

To send credentials, or to allow a specific domain access to a resource, we specify that domain as the allowed origin. 
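With the cors middleware for Express, this comes down to an options object like the one below (the origin is a placeholder for your client's real domain); you enable it with app.use(cors(corsOptions)):

```javascript
// Options for the `cors` middleware (npm package "cors").
// In your Express app: app.use(cors(corsOptions));
const corsOptions = {
  origin: 'https://www.example.com', // sent back as Access-Control-Allow-Origin
  credentials: true,                 // sets Access-Control-Allow-Credentials: true
};
```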

Ex: To send credentials or allow access to a resource from multiple domains. 
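One way to do this with the cors middleware is to pass a function as the origin option; the two domains below are placeholders:

```javascript
// Hypothetical client origins that may call this API.
const allowedOrigins = ['https://app.example.com', 'https://admin.example.com'];

const corsOptions = {
  // `origin` may be a function, which lets us decide per request.
  // `requestOrigin` is undefined for same-origin and non-browser
  // requests, which we allow through here.
  origin(requestOrigin, callback) {
    if (!requestOrigin || allowedOrigins.includes(requestOrigin)) {
      callback(null, true); // reflect the request origin in the response
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  credentials: true,
};
// In your Express app: app.use(cors(corsOptions));
```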


Ex: To let any page have access. 
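With the cors middleware, calling it with no options already does this; the explicit equivalent is a wildcard origin:

```javascript
// app.use(cors()) with no options responds with
// Access-Control-Allow-Origin: *, so any page may call the API.
// The explicit equivalent:
const corsOptions = { origin: '*' };
// Note: the wildcard cannot be combined with credentials; browsers
// reject credentialed responses whose allowed origin is '*'.
```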


Performing requests with API

Not every method should be exposed to other origins. Within the CORS middleware, you can specify which HTTP methods the CORS policy allows. 
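For example, to allow only read and create operations cross-origin (the origin below is again a placeholder):

```javascript
const corsOptions = {
  origin: 'https://www.example.com', // placeholder origin
  methods: ['GET', 'POST'],          // sent as Access-Control-Allow-Methods
};
// A preflighted cross-origin DELETE will now be blocked by the browser.
// In your Express app: app.use(cors(corsOptions));
```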



You can also specify which routes of your server can be accessed. 
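Instead of applying the middleware app-wide, you can attach it to individual routes. A sketch, assuming an Express app with a hypothetical /products route:

```javascript
const express = require('express');
const cors = require('cors');

const app = express();

// Only this route accepts cross-origin requests; the rest of the
// app keeps the browser's default same-origin restriction.
app.get('/products', cors(), (req, res) => {
  res.json([{ id: 1, name: 'demo' }]);
});

app.listen(3000);
```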




The problem we faced. 

In our case, the cookie session was being set in the response header (Set-Cookie), but the cookie was not appearing in the browser's cookie storage. We overcame the problem by enabling credentials in CORS. 


Conclusion: 

When you deploy an application on a server, you should not accept requests from every domain. Instead, you should specify which origins can make requests to your server. This way, you are able to block users who attempt to clone your site or make requests from an unauthorized server. This is an important security measure.
