Want to collaborate?

Right now, you can get in touch with me for a few things:
  • Writing
  • Content Creation
  • Open Source Contributions
  • + more

Aman Jagadhabhi

  • @amanj31
  • Network Security Engineer, Cloud Engineer, Full Stack Web Developer
  • He/Him
  • India
I’m a professional cybersecurity engineer and full-stack developer with 3 years of experience leading both front-end and back-end development!
Aman's Collections

Articles and blogs

10 Highlights
What Aman's working on


2022
May 22, 2022

Form/Form Validation!



JavaScript Forms

A form is a document that contains spaces (also known as fields or placeholders) for writing or selecting information across a series of documents with similar contents. Except for a serial number, the printed parts of such documents are usually identical.

When completed, a form can be a statement, a request, an order, and so on; a cheque can also be a form. There are also tax forms: filling one out is required to calculate how much tax one owes, and/or the form is a request for a refund (see also the tax return).

When the information collected on a form needs to be passed to different departments within an organization, forms may be filled out in duplicate (or triplicate, meaning three copies). A form in HTML is basically a container with different elements inside it.

An HTML form is created like this:

<form action="/signup" method="post" id="signup">
</form>

The form element takes two main attributes:

  • action: describes the URL (uniform resource locator) that will process the form data on submission.
  • method: specifies the HTTP method used to submit the form. While this attribute can accept different values, the most common methods are get and post. The GET method sends the form data to the server by appending it to the action URL as a query string, while the POST method sends the form data in the request body; a quick illustration follows below.
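As a minimal sketch of the difference (the /search URL and the query field name here are just examples):

<!-- GET: submitting navigates to /search?query=hello -->
<form action="/search" method="get">
  <input type="text" name="query" value="hello">
  <button type="submit">Search</button>
</form>

<!-- POST: the same data travels in the request body instead of the URL -->
<form action="/search" method="post">
  <input type="text" name="query" value="hello">
  <button type="submit">Search</button>
</form>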
The HTML form element also works with two useful button types:

  • submit: submits your form data to the server (to the action URL).
  • reset: resets the form fields to their initial values.

We also use the input element, which creates a box for a text value, a message, or any other item the user needs to fill in:

<form>
  <label for="fullname">Name</label>
  <input type="text" id="fullname" name="fullname" placeholder="Your full name"/>
  <label for="email">Email</label>
  <input type="email" id="email" name="email" placeholder="Enter your email"/>
  <button type="submit" id="submit">Submit</button>
</form>

Finding Form Elements in JavaScript

To locate form elements in JavaScript, we use DOM (Document Object Model) selection methods such as:

  • document.getElementById()
  • document.getElementsByClassName()
  • document.getElementsByTagName() and many more

We can assign the result to a variable and locate an item inside the HTML form above using any of these DOM methods, say document.getElementById():

const submitButton = document.getElementById("submit");

An HTML document can contain multiple forms, which are exposed through:

document.forms

This returns an HTMLCollection of the document's forms. Since a page may contain several forms, we use indexes to find a particular one:

document.forms[0]
document.forms[1]

document.forms[0] locates the first form element in the document's list of forms.

Submitting Forms

All forms have submit buttons. When you submit the form, the submit event fires before the request is delivered to the server. This allows you to validate the form data, and if the data is incorrect, you can cancel the submission.

To add an event listener to the submit event, use the form element's addEventListener() method as follows:

const form="document.getElementById("signup")
form.addEventListener('submit', (event)=> {
//work on form data
});

To prevent the form from submitting, call the preventDefault() method of the event object inside the submit event handler, so we have:

form.addEventListener('submit', (event) => {
    // don't submit form
    event.preventDefault();
});

Typically, you call the event.preventDefault() method when the form data is invalid. To submit a form programmatically in JavaScript, you call the submit() method of the form object:

form.submit();

It should be noted that the form.submit() method does not trigger the submit event. As a result, you should always validate the form before calling this method.
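Putting these pieces together, here is a minimal sketch for the signup form above (the fullname field name comes from the earlier example):

const form = document.getElementById("signup");

form.addEventListener('submit', (event) => {
  // Stop the default submission so we can validate first
  event.preventDefault();
  const fullname = form.elements["fullname"].value;
  if (fullname.trim() === "") {
    alert("Name cannot be left empty");
    return; // keep the form on screen for corrections
  }
  // Data looks valid; submit programmatically.
  // Note: form.submit() will NOT fire this 'submit' event again.
  form.submit();
});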

JavaScript Form Validation

HTML form validation can be done with JavaScript.

If a form field (fullname) is empty, the function below alerts a message and returns false to prevent the form from being submitted:

JavaScript Example

function validateForm() {
  let name = document.forms["myForm"]["fullname"].value;
  if (name == "") {
    alert("Name cannot be left empty");
    return false;
  }
}

The function can then be called when the form is submitted:

HTML Form Example

<form name="myForm" action="/action.php" onsubmit="return validateForm()" method="post">
Name: <input type="text" name="fullname">
<input type="submit" value="Submit">
</form>

Automatic HTML Form Validation

HTML form validation can also be performed automatically by the browser:

If a form field (fullname) is empty, the required attribute prevents the form from being submitted:

HTML Form Example

<form action="/action.php" method="post">
  <input type="text" name="fullname" required>
  <input type="submit" value="Submit">
</form>

Automatic HTML form validation is a relatively new feature, so it does not work in all browsers, especially Internet Explorer 9 and below.

Data Validation

Data validation is the act of making sure that user input is clean, correct, and useful.

Typical validation tasks are:

  • Has the user filled in all the required fields?
  • Has the user entered a valid date and time?
  • Has the user entered text in a numeric field?

The most common reason for data validation is to ensure correct user input.
Validation can be implemented in a variety of ways, including:

Server-side validation, performed by a web server after the input has been sent to the server.

Client-side validation, performed by the web browser before the data is submitted to a web server.

HTML Constraint Validation

HTML5 introduced a new HTML validation concept known as constraint validation, which is based on the following:

  • Constraint validation with HTML input attributes
  • Constraint validation with CSS pseudo-selectors
  • Constraint validation with DOM properties and methods

A brief illustration of all three follows below.
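As a minimal sketch (the userEmail id is just an example; the attribute, selectors, and DOM members shown are standard HTML5 features):

<!-- HTML input attribute: required makes an empty field invalid -->
<input type="email" id="userEmail" required>

/* CSS pseudo-selectors style the field based on its validity state */
input:invalid { border-color: red; }
input:valid { border-color: green; }

// DOM properties and methods expose the same state to JavaScript
const email = document.getElementById("userEmail");
console.log(email.checkValidity()); // false while the field is empty
console.log(email.validity.valueMissing); // true while the field is empty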
Summary

We use <form> to start a form element. Forms must be validated to keep users' information clear, neat, and correct, and it's important to add an event listener to validate them.

May 22, 2022

How to use OCR with JavaScript?


Intro:

The rise of artificial intelligence in recent years has been driven by a phenomenon of digitalization that is omnipresent in all professional environments. This digital transformation has been initiated by most companies, large and small, and one of its main axes is the digitization of data. It is for this purpose that a computer vision service was developed: Optical Character Recognition, commonly known as OCR.

The origin of OCR dates back to the 1950s, when David Shepard founded Intelligent Machines Research Corporation (IMRC), the world’s first supplier of OCR systems operated by private companies for converting printed messages into machine language for computer processing.

Today there is no longer a need for a system designed for a particular font. OCR services are intelligent, and OCR is even one of the most important branches of computer vision, and more generally of artificial intelligence. Thanks to OCR, it is possible to obtain a text file from many digital sources:

  • PDF file
  • PNG, JPG image containing writings
  • Handwritten documents
The use of OCR for handwritten documents, images or PDF documents can concern companies in all fields and activities. Some companies may have a more critical need for OCR for character recognition on handwriting, combined with Natural Language Processing (NLP) for text analysis. For example, the banking industry uses OCR to approve cheques (details, signature, name, amount, etc.) or to verify credit cards (card number, name, expiration date, etc.). Many other business sectors make heavy use of OCR, such as health (scanning of patient records), police (license plate recognition) or customs (extraction of passport information), etc.

How OCR works: OCR technology consists of 3 steps:

  • An image pre-processing stage, which consists of processing the image so that it can be exploited and optimized for character recognition. Pre-processing manipulations include realignment, de-interference, binarization, line removal, zoning, word detection, script recognition, segmentation, normalization, etc.
  • Extraction of the statistical properties of the image. This is the key step for locating and identifying the characters in the image, as well as their structures.
  • A post-processing stage, which consists of reconstructing the image as it was before the analysis, highlighting the "bounding boxes" (rectangles delimiting the text in the image) of the identified character sequences.

This article briefly covers how to use OCR with JavaScript. We will see that there are many ways to do it, including open source engines and cloud APIs.

Open source engines are available for free; you can often find these solutions on GitHub. You just need to download the library and run the engine directly on your machine. Cloud OCR engines, by contrast, are provided by AI providers, who sell you requests that you process via their APIs. They may sell requests under a license model (you pay a monthly subscription corresponding to a certain number of requests) or a pay-per-use model (you pay only for the requests you send).

How to choose between open source and cloud engines?

When you are looking for an OCR engine, the first question you need to ask yourself is: which kind of engine am I going to choose?

Of course, the main advantage of open source OCR engines is that they are open source: they are free to use, and you can use the code however you want, potentially modifying the source code or tuning the model's hyperparameters. Moreover, you will have no trouble with data privacy because you host the engine on your own server, which also means that you will need to set up this server, maintain it, and ensure that you have enough computing power to handle all the requests.

On the other hand, cloud OCR engines are paid, but the AI provider handles the server for you and maintains and improves the model. In this case, you have to accept that your data will transit through the provider's cloud. In exchange, the provider processes millions of documents to deliver a very performant engine, backed by servers that can support millions of requests per second without losing accuracy or speed.

Now that you know the pros and cons of open source and cloud engines, please consider that there is a third option: build your own OCR engine. With this option, you can build the engine on your own data, which helps guarantee good performance, and you keep your data safe and private. However, you will have the same constraint of hosting your engine, and this option is realistic only if you have data science abilities in your company. In short, use an existing engine (cloud or open source) unless you have both the data and the expertise to build and host your own.

Open Source OCR engines:

There are multiple open source OCR engines available; you can find most of them on GitHub. Here are the most famous ones:

Tesseract:

Tesseract is an optical character recognition (OCR) engine. That is, it will recognize and "read" the text embedded in images.

There is a wrapper, Tesseract.js, that makes Tesseract work with JavaScript. Tesseract has Unicode (UTF-8) support and can recognize more than 100 languages "out of the box".

Tesseract supports various output formats: plain text, hOCR (HTML), PDF, invisible-text-only PDF, TSV and ALTO.
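As a minimal sketch using the Tesseract.js wrapper (assuming it is installed from npm as tesseract.js; the image path is just an example):

// npm install tesseract.js
const Tesseract = require('tesseract.js');

// Recognize English text in a local image and print it
Tesseract.recognize('example.png', 'eng')
  .then(({ data: { text } }) => {
    console.log(text);
  })
  .catch((err) => console.error(err));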

docTR

docTR is an end-to-end OCR library provided by Mindee. It uses a two-stage approach: text detection (localizing words), then text recognition (identifying all the characters in each word). As such, you can select the architecture used for text detection, and the one used for text recognition, from the list of available implementations.

Cloud OCR engines:

There are many cloud OCR engines on the market, and you may have trouble choosing the right one. Here are the best providers on the market:

  • Base64
  • Cloudmersive
  • OCR Space
  • Google Cloud Vision Text Recognition
  • Amazon Textract
  • Microsoft Azure Computer Vision OCR
All of these OCR providers can give you good performance for your project. Depending on the language, quality, format, and size of your documents, the best engine will vary between providers. The only way to know which provider to choose is to compare their performance on your own data.

Eden AI OCR API:

This is where Eden AI enters your process. The Eden AI OCR API allows you to use engines from all of these providers with a single API, a single token, and simple JavaScript documentation.

By using Eden AI, you can compare all the providers on your own data, change providers whenever you want, and call multiple providers in the same request. You pay the same price per request as if you had subscribed directly to the providers' APIs, and you lose no latency performance.

Here is how to use OCR engines in JavaScript with Eden AI SDK:
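A rough sketch of such a call, assuming a REST endpoint and field names modeled on Eden AI's public documentation (the URL, body fields, and response shape here are assumptions, so verify them against the current docs):

// Hypothetical sketch; check the endpoint and parameters in Eden AI's docs
async function runOcr() {
  const response = await fetch("https://api.edenai.run/v2/ocr/ocr", {
    method: "POST",
    headers: {
      Authorization: "Bearer YOUR_EDEN_AI_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      providers: "google",                         // switch or combine providers here
      file_url: "https://example.com/invoice.png", // example document
      language: "en",
    }),
  });
  const result = await response.json();
  console.log(result);
}

runOcr().catch(console.error);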


If you want to call another provider, you just need to change the value of the "providers" parameter. You can see all the available providers in the Eden AI documentation. Of course, you can call multiple providers in the same request in order to compare or combine them.

Conclusion

As you have seen in this article, there are many options for using OCR with JavaScript. For developers who do not have data science skills, or who want to use OCR engines quickly and simply, there are many open source and cloud engines available. Each option has pros and cons, and you now have the clues to choose the best option for you.

If you choose a cloud OCR engine, you will need some help finding the best one for your data. Moreover, OCR providers often update and retrain their models, which means you may have to change providers in the future to keep the best performance for your project. With Eden AI, all of this work is simplified: you can set up an OCR engine in JavaScript in less than 5 minutes and switch to the best provider at any moment.

You can create your Eden AI account here and get your API token to start implementing an OCR engine in JavaScript!

May 22, 2022

Function Declaration vs Function Expression!

Function Declaration:

A function declaration defines a function with the specified parameters. Function declarations are processed before the code block is executed, so they are visible everywhere in their block.


function sayHi() {
    // some code here
}

Function Expression:

A function expression is a function that is stored in a variable. Function expressions are created when the execution flow reaches them.

const sayHi = function () {
    // some code here
};

Difference:

Hoisting:

  1. In JavaScript, hoisting refers to the availability of variables and functions at the beginning of their scope. Function declarations are hoisted, so they can be called before they appear in the code.
  2. Function expressions will load only when the interpreter reaches them, so the following fails:
1. sayHi();
2. const sayHi = function () {
   console.log('Hi');
}

The above code will throw an error because the sayHi function is called on line 1, before the expression on line 2 has been evaluated.
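For contrast, here is a minimal sketch of the same call using a function declaration, which is hoisted and therefore works:

sayHi(); // prints 'Hi' even though the declaration comes later

function sayHi() {
    console.log('Hi');
}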


Callback:

  1. In JavaScript, a callback is a function that is passed to another function as an argument.
  2. If a callback is a function expression, it will not be available outside of the function that uses it.

const sayHi = function () {
   console.log('Hi');
}

const greetings = (func) => {
    func();
}

greetings(sayHi);


So, in the above code, the sayHi function is also present in the global scope. To avoid this, we can pass the function expression inline, as in the code below.

const greetings = (func) => {
    func();
}

greetings( () =>{
   console.log('Hi');
});

Here the callback function will not be in the global scope.


IIFE:

  1. An Immediately-Invoked Function Expression (IIFE) is a function that is executed at the time of its creation.
  2. An IIFE can be created using a function expression only.

(() => {
  console.log("Hi");
})();

May 21, 2022
Why AWS S3 is the best fit for Data Lake!
What is a data lake and why should your organisation consider creating one? A data lake is essentially a technique to monetise and derive economic value from your data.


For example, Fortnite from Epic Games has been a wildly successful game that has scaled incredibly quickly (to over 125 million players), and they've accomplished this through an engagement model in which they monitor interaction with the game in near real-time, run analytics on that data, and constantly customise the game to offer a better player experience. The result is a real-time feedback loop that makes the game extremely responsive to user input.

Example AWS Data Lake Architecture for the Gaming Industry

AWS may be utilised as a data lake to host the gaming platform and do the analytics required to keep the players interested.


AWS S3 may be utilised to store the telemetry data from a large number of gamers, and a near-real-time data pipeline architecture (comprising Spark and DynamoDB) can then be used to analyse the streaming data. The same data can also be placed in batch pipelines (comprising S3, EMR, etc.) that may be utilised later for more in-depth analyses, machine learning models, etc. This provides a real-time engagement engine as well as far deeper insights over time, allowing them to optimise the game and add responsive new features.

Data-Lake as a Journey…

When you begin to consider what a data lake implies for your organisation, you will go on a journey. Whether you're just starting out and have never done any serious analytics, are used to doing only basic business insights, or are a really inventive practitioner of analytics, there is always room for progress and evolution.

One of the fundamental concepts of data-lake architecture is that you must be able to evolve around your data as your skills grow, and more importantly, be able to innovate and experiment with your data in a non-disruptive manner: determine whether a new algorithm or processing tool adds value and then rapidly scale it into production. This requires you to fundamentally adapt your tools and procedures to the data.

Therefore, when we consider constructing data-lake architectures, we want to ensure that they can be constructed at a rate that allows you to change and innovate at the optimal rate for your organisation.

Why Amazon Web Services for Big Data Analytics and Data Lakes? A major component of this is agility. You want to innovate as quickly as possible without being hindered by the infrastructure/tools/platform you're utilising to drive innovation. Therefore, AWS is primarily focused on providing you with a platform where you can:

  • As the need arises, implement new features, services, and capabilities in an extremely agile manner.
  • Try things out, fail quickly, and move on to the next item at very little expense; or, if the experiment is successful, scale it rapidly, as scaling is the second part of a data lake.
Agility & Scalability: AWS's platforms and tools for constructing data lakes are intrinsically scalable up to hundreds of petabytes and exabytes of data.

Capabilities: Because your use cases will be unique, you must possess a comprehensive set of capabilities that you can apply to the data to extract value. And as we go in the data-lake architectural journey, your abilities will grow, you'll want to do new things and get new insights, therefore you'll need a portfolio that won't limit you, so that you may discover (whether it's an AWS native service or a partner) the proper tool for the task.

Cost: Cost is another essential factor on which we must be intently focused. If you're successful, your data quantities will rise beyond your wildest dreams. When you're just starting out, your budget is unlikely to expand rapidly, thus we must be able to keep costs optimised and not increase the cost of the infrastructure as your demand increases.

Migration: This is unlikely to be an AWS-exclusive option. You will either have legacy equipment, on-premise data sources, or third-party data sources. Therefore, data transfer and integration with data-lake must be simple.

Faster Insights: Faster insights give you a competitive advantage, namely accelerated time to market and an enhanced capacity to provide new services. When designing this data-lake, one of the essential pillars we need to emphasise is the expedited acquisition of insights.

So how do we define the data-lake at AWS? Since the earliest days of Hadoop, when a data lake was essentially HDFS, there have been several meanings of the term.


However, we wish to adopt a broader perspective, which we describe as encompassing both relational and non-relational data. Earlier data lakes consisted solely of structured and semi-structured data from a number of sources. However, as we examine more and more novel use-cases, we begin to observe an increase in unstructured data types, such as video, radar data, LIDAR data, etc.

It's not just about Hadoop or data warehousing; it's about a wide array of tools that can go into the data and perform precisely what you want with it.

Data-Lake on AWS

So, breaking this down further, what does a data lake on AWS look like? S3 is the core component of the foundation.


1. Data Ingestion

The first thing you have to do is get your data into S3.


AWS provides a multitude of data import solutions to facilitate this process. AWS Kinesis is a suite of technologies for ingesting streaming data, such as log data, streaming video data, etc. In addition, AWS offers Kinesis Analytics, which allows you to analyse data as it streams in and make decisions on it before it reaches the data lake.

AWS provides a Database Migration Service for integrating relational data from on-premises or cloud-based relational databases into the data lake.

AWS Storage Gateway may be used to integrate and migrate to the cloud on-premises equipment that does not necessarily speak object storage or an analytics interface, but is accustomed to communicating with a file device. Lastly, you may already have an on-premises Hadoop cluster or a data warehouse; you can configure AWS Direct Connect to provide a direct network link between on-premises installations and AWS services.

You may have accumulated a great deal of data on on-premises storage devices and wish to transfer it to the data lake. However, it is challenging to keep these two worlds in sync, so AWS developed DataSync to facilitate this. It is a high-performance agent that you can install on your existing on-premises storage, and it will automatically transmit and synchronise data with AWS. It is simple to use, very fast, and lets you automatically synchronise the on-premises environments where you stage data with your AWS data lake.

Data ingestion is essential for making your data actionable, and you must select the appropriate method for each type of data.
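As one concrete, minimal illustration of ingestion, here is a sketch that lands an object in S3 using the AWS SDK for JavaScript v3 (the bucket name, key, and payload are placeholders):

// npm install @aws-sdk/client-s3
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" });

async function ingest() {
  // Land a raw telemetry record in the data lake's landing zone
  await s3.send(new PutObjectCommand({
    Bucket: "my-data-lake-raw",                  // placeholder bucket
    Key: "telemetry/2022/05/21/event-0001.json", // partition-style key
    Body: JSON.stringify({ player: "p1", score: 42 }),
    ContentType: "application/json",
  }));
}

ingest().catch(console.error);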

2. Catalogue

The second component is central to the construction of a data lake. Without a data catalogue, you have only a storage platform, not a true data lake. If you want to get insights from your data, you need to know what you have, what sort of data it is, what metadata is connected with it, and how various data sets relate to one another. This is where AWS Glue comes in: using this rich and adaptable data catalogue, you can quickly crawl data, categorise it, catalogue it, and gain insights from it.

3. User Interface

After analysing the data and deriving insights from it, you must be able to communicate those findings to a wide range of consumers. You may do it directly, using analytic tools that speak SQL natively, or by putting API gateways in front of the data and establishing a consumption mechanism similar to a shopping cart. A range of AWS products such as API Gateway, Cognito, and AppSync can help you construct user interfaces on top of your data lake.

4. Security

Managing security and governance is a fundamental component as well. It would not be a usable data-lake if it were not secure, because a data-lake is ultimately about combining several data silos in order to get more insights. When you bring all your data and all your users onto a single platform, securing that one platform is far easier than securing a large number of separate silos. AWS provides a vast array of security and management capabilities, which we will explore in further detail, to help you do so in a safe, resilient, and granular manner.

5. Analytics

In the end, a data lake is all about extracting value from your data, and this boils down to the analytical tools you employ. AWS has a multitude of native tools for querying data in situ, such as Athena, Redshift Spectrum, SageMaker, etc., as well as a multitude of third-party tools that are far more performant and scalable for workloads such as Spark or data warehousing.
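To make in-place querying concrete, here is a hedged sketch that starts an Athena query over data catalogued in Glue, using the AWS SDK for JavaScript v3 (the database name, table, and results bucket are placeholders):

// npm install @aws-sdk/client-athena
const { AthenaClient, StartQueryExecutionCommand } = require("@aws-sdk/client-athena");

const athena = new AthenaClient({ region: "us-east-1" });

async function queryLake() {
  // Kick off a SQL query directly against data sitting in S3
  const { QueryExecutionId } = await athena.send(new StartQueryExecutionCommand({
    QueryString: "SELECT player, MAX(score) FROM telemetry GROUP BY player",
    QueryExecutionContext: { Database: "game_lake" },              // placeholder Glue database
    ResultConfiguration: { OutputLocation: "s3://my-athena-results/" },
  }));
  console.log("Started query:", QueryExecutionId);
}

queryLake().catch(console.error);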

AWS S3: Best Place for a Data-Lake

  • AWS S3 was created with 99.999999999% (11 9s) durability and high availability in mind. It is the second-oldest service offered by Amazon (about 13 years old) and operates at vast scale, holding exabytes of data and trillions of objects.
  • A large range of security, compliance, and auditing features are native to S3, because security is one of its foundational components. As your data lake grows, you may wish to govern these items at the level of each individual object, whether to provide extremely granular security and access controls or to implement intelligent data management strategies that help you optimise costs.
  • You will also need business insights into your data, which are distinct from analytic insights: analysing how your data is utilised by various consumers in order to charge them accordingly.
  • And lastly, capabilities for ingesting data. Before you can do anything with the data, you must first bring it in, and there are more options to import data into AWS S3 than into virtually any other platform.
Additional References

“Cloud Object Storage – Amazon S3 – Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/s3. Accessed 21 May 2022.

“Amazon Athena - Serverless Interactive Query Service - Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/athena. Accessed 21 May 2022.

“Fast NoSQL Key-Value Database – Amazon DynamoDB – Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/dynamodb. Accessed 21 May 2022.

“Managed Open-Source Elasticsearch and OpenSearch Search and Log Analytics – Amazon OpenSearch Service – Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/elasticsearch-service. Accessed 21 May 2022.

“Amazon Kinesis - Process & Analyze Streaming Data - Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/kinesis. Accessed 21 May 2022.

“AWS Database Migration Service - Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/dms. Accessed 21 May 2022.

“AWS Storage Gateway | Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/storagegateway. Accessed 21 May 2022.

“Amazon QuickSight - Business Intelligence Service - Amazon Web Services.” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/quicksight. Accessed 21 May 2022.

“Amazon Cognito - Simple and Secure User Sign Up & Sign In | Amazon Web Services (AWS).” Amazon Web Services, Inc., aws.amazon.com, aws.amazon.com/cognito. Accessed 21 May 2022.

May 20, 2022

How to create a React app?


There are two ways to create React apps. First, you can use npm (Node Package Manager) installed on your machine. If you're using VS Code, you need to make sure you've configured your machine to run React code in VS Code using npm. You will also need to set up a build environment for React, which typically involves npm, webpack, and Babel.


We can also create a React app without npm by importing the React library directly in our HTML code. This is exactly what we will do here. Here are the steps required to create and run a React app without npm.


Step 1. Create a simple HTML file


Create a simple HTML file with the following HTML and save it as Index.html in any folder you have direct access to. We will open this HTML file directly in the browser.

<html>  
    <head>  
        <title>Let's React without npm</title>        
    </head>  
    <body>              
    </body>  
</html>


The above HTML file has a head, title, and body, the simplest form of an HTML file.


Step 2. Import React library


To make React work directly in an HTML document, we need to import the React library into the HTML. The React library is defined in two .js files; the files differ for development and production, as you can see below.


The following script tags import the React library. Copy and paste this code into the <head> tag of the HTML.


<!-- Load React Libraries -->  
<!-- Note: when deploying, replace "development.js" with "production.min.js". -->  
<script src="https://unpkg.com/react@17/umd/react.development.js" crossorigin></script>  
<script src="https://unpkg.com/react-dom@17/umd/react-dom.development.js" crossorigin></script>

The final HTML document now looks like this,

<html>  
    <head>  
        <title>Let's React without npm</title>  
        <!-- Load React Libraries -->  
        <!-- Note: when deploying, replace "development.js" with "production.min.js". -->  
        <script src="https://unpkg.com/react@17/umd/react.development.js" crossorigin></script>  
        <script src="https://unpkg.com/react-dom@17/umd/react-dom.development.js" crossorigin></script>                      
    </head>  
    <body>              
    </body>  
</html>



Step 3. Placeholder for React Component


Once the React library is imported, we can use React syntax in our code. React uses components to represent the UI. Think of a component as a user control that has code to represent visual interfaces and data.


To place a component on a page, we need a placeholder where that component will load. We add a <div> tag inside the <body> tag of the page and give it an id="root". This is the position where our React component will render:

<body>  
   <div id="root"></div>       
</body>


Step 4. Create a React Component


As you may already know, the UI in React is created using components. A component in React is declared as a class. Here is a simple component that displays the text "React without npm":


class HelloClass extends React.Component {
    render() {
        return React.createElement('div', null, 'React without npm');
    }
}

In the above code, React.createElement is a React function that creates an element, a <div> in this case, and displays text inside that div.

Step 5. Call React Component


The final step in the process is to call the React component from JavaScript. The code below uses ReactDOM.render(), which is responsible for rendering a React component. The first parameter is an element created from the component class; the second parameter is the root element where the component is rendered. In our case, we render the component inside the div with id="root":

ReactDOM.render(
    React.createElement(HelloClass, null, null),
    document.getElementById('root')
);


Step 6. Complete code


Here is the complete HTML document.



<html>  
    <head>  
        <title>React's React</title>          
        <!-- Load React. -->  
        <!-- Note: when deploying, replace "development.js" with "production.min.js". -->  
        <script src="https://unpkg.com/react@17/umd/react.development.js" crossorigin></script>  
        <script src="https://unpkg.com/react-dom@17/umd/react-dom.development.js" crossorigin></script>  
    </head>  
    <body>  
        <div id="root"></div>      
       <!-- This is embedded JavaScript. You can even place this in separate .js file -->  
       <script>              
            window.onload = function()  
            {        
                class HelloClass extends React.Component   
                {  
                    render()   
                    {  
                        return React.createElement('div', null, 'React without npm..');  
                    }  
                }  
                ReactDOM.render(  
                    React.createElement(HelloClass, null, null),  
                    document.getElementById('root')  
                );  
            };          
        </script>  
    </body>  
</html>

Step 7. Run React code

To run the above code, create a text file in any text editor such as Notepad or Visual Studio Code, save it as Index.html (or another name), and open the HTML file in a web browser.










May 20, 2022
I completed the Coursera course "Cybersecurity Compliance Framework & System Administration"!

https://coursera.org/share/6b0221779a94fe75ae7628d9955b3007