Monday, 4 June 2018


FOG COMPUTING


Fog computing is a term coined by Cisco. It is an extension of cloud computing that brings cloud services to the edge of a network. Fog computing is also known as edge computing or fogging. It provides computing, storage and networking services between end devices and cloud computing data centers.

The main areas of concern in cloud computing architecture are high latency and QoS (Quality of Service). The goal of fogging is to improve efficiency and reduce latency. It also reduces the amount of data that must be transported to the cloud for processing, analysis and storage. Beyond efficiency, fogging may also be used for security and compliance reasons.

The metaphor comes from the meteorological term for a cloud close to the ground: like fog, this computing layer concentrates at the edge of the network. The OpenFog Consortium was founded in November 2015 by members from Cisco, Dell, Intel, Microsoft, ARM and Princeton University. Its mission is to develop an open reference architecture and convey the business value of fog computing.

Data is generated and collected by edge devices and sensors in the network, but these devices lack the compute and storage resources to perform advanced analytics and machine-learning tasks. Cloud servers have the storage capacity and the power to do the computation, but they are often too far away to process the data and respond in a timely manner. This increases latency and degrades the overall performance of the network. In addition, sending raw data to the cloud over the internet can have privacy, security and legal implications.

In a fog environment, the processing takes place on a smart device, or in a smart router or gateway, thus reducing the amount of data sent to the cloud. It is important to note that fog networking complements, rather than replaces, cloud computing: fogging allows for short-term analytics at the edge, while the cloud performs resource-intensive, longer-term analytics.
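
As a minimal sketch of this idea (the sensor window and the cloud upload below are hypothetical stand-ins, not any particular fog platform's API), an edge gateway might aggregate a window of raw readings locally and forward only a compact summary to the cloud:

    # A minimal sketch of fog-style edge processing (illustrative only).
    # The readings and the upload function are hypothetical stand-ins.
    import statistics
    import time

    def summarize(readings):
        """Reduce a window of raw sensor readings to one small summary record."""
        return {
            "count": len(readings),
            "mean": round(statistics.mean(readings), 2),
            "min": min(readings),
            "max": max(readings),
            "timestamp": time.time(),
        }

    def send_to_cloud(summary):
        # Stand-in for an HTTPS call to a cloud ingestion endpoint.
        print("uploading to cloud:", summary)

    # 1,000 raw temperature readings stay at the edge; one record goes upstream.
    raw_window = [20.0 + (i % 7) * 0.1 for i in range(1000)]
    send_to_cloud(summarize(raw_window))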

The major area of application of fog computing is IoT. For many IoT applications where cloud computing is not feasible, fog computing can be used instead; it addresses the needs of both consumer IoT and industrial IoT. Smart sensors and IoT devices generate immense amounts of data, and it would be costly and time-consuming to send all of it to the cloud for processing and analysis. Fog computing reduces the bandwidth requirement and cuts down the back-and-forth communication between sensors and the cloud that can otherwise hurt IoT performance. It is also very effective in ad hoc network environments with intermittent connectivity and low-bandwidth media.


Pros and Cons

Pros:
  • Reduces the amount of data sent to the cloud
  • Conserves network bandwidth
  • Improves system response time
  • Supports mobility
  • Minimizes network and Internet latency

Cons:
  • Dependence on physical location takes away the anytime, anywhere, any-data benefit of the cloud
  • Security issues, such as IP address spoofing
  • Privacy issues
  • Trust and authentication concerns
  • Wireless network security concerns





Ms. Nisha Wadhawan
Assistant Professor
Dept. of Management Studies



Digital Marketing

Digital marketing is an umbrella term for all online marketing efforts. Businesses leverage digital channels such as Google search, social media, email, and their websites to connect with their current and prospective customers. People now spend twice as much time online as they did 12 years ago, and the way people shop and buy has changed with it, which means offline marketing isn't as effective as it used to be.
Marketing has always been about connecting with our audience in the right place and at the right time. In today's world, that means meeting them where they are already spending their time: on the internet. Unlike most offline marketing efforts, digital marketing allows marketers to see accurate results in real time. Anyone who has ever placed an advert in a newspaper knows how difficult it is to estimate how many people actually flipped to that page and paid attention to it; there is no surefire way to know whether that ad was responsible for any sales at all.

Bhawna Dhruv
Assistant Professor
Dept of Information Technology



Tuesday, 29 May 2018


What ITC did right in its Crisis Management


Crisis management is akin to 'firefighting' in the corporate arena: it is when you try to mitigate instant damage to a reputation your company has built over the years. If done right, it can be just the ticket in a PR professional's career. If handled sloppily, it can do long-term harm to a brand. For instance, Nestle Maggi's sales dropped 20 per cent year on year because of the lead-in-noodles scandal.

Earlier this year, an old fake video spread like wildfire on social media, claiming that ITC's Aashirvaad brand had mixed plastic into its flour. The video set the multi-billion-dollar corporation's crisis management team in full swing. It seemed that ITC had learnt its lessons from Nestle's Maggi crisis.

What ITC did right:

  1. Quick Mover: According to the company, the first video claiming the presence of plastic in Aashirvaad atta appeared in July 2017. It was telecast on a local TV channel in Siliguri, West Bengal. Company officials quickly took notice of the matter and contacted the channel, asking them to withdraw it.
  2. Legal Action: As soon as the video went viral on WhatsApp and Facebook, the Company lodged police complaints in three cities: Kolkata, Hyderabad and Delhi. It also moved the City Civil Court in Bengaluru and won a restraining order against anyone circulating such videos.
  3. Widespread Media Coverage: Through its legal course of action, the Company succeeded in garnering significant media attention, with almost daily coverage in newspapers.
  4. Advertising Campaign to win the trust of consumers: ITC launched an advertising campaign on TV countering the allegations (https://youtu.be/YBA_tXKsDB0). With a rational appeal, it tried to establish that what was being called plastic is in fact a wheat protein known as gluten, which occurs naturally in wheat flour. To build trust and two-way communication, the ad concluded by providing a toll-free number, urging consumers to get in touch with brand representatives for any complaints or clarifications.
  5. Putting up a strong case online: On the home page of its website, http://www.aashirvaad.com/, the Company posted a detailed Q&A that dispelled the rumours and myths around the controversy. It also posted videos from experts as well as sample test reports from FSSAI-notified external labs.
  6. Being on the right side of the food regulators: Hemant Malik, ITC Divisional Chief Executive (Foods), was quoted in newspapers stating that "even FSSAI mandates that wheat flour should contain a minimum of six per cent gluten, which is wheat protein, on a dry weight basis. Indian wheat typically has 9-10 per cent gluten. We urge our consumers not to be misled by false and malicious videos." ITC openly made claims that were in line with the Indian food regulators. Unlike Nestle during the Maggi crisis, ITC appeared to be right by staying on the right side of the regulators.




Ms. Chhavi Bakaria
Assistant Professor
Department of Communication Studies

Artificial Intelligence: The Power of the Future


AI (Artificial Intelligence) is the field of computer science that addresses how computers can be made to perform cognitive functions ascribed to humans. It is the simulation of human intelligence processes by machines, especially computer systems. When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. Artificial intelligence and increasingly complex algorithms now influence our lives and our civilization more than ever.

Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential to help civilization flourish like never before, as long as we manage to keep the technology beneficial. The areas of AI application are diverse and the possibilities extensive: in particular, thanks to improvements in computer hardware, certain AI algorithms already surpass the capacities of human experts today. As AI capacity improves, its field of application will grow further. In concrete terms, it is likely that the relevant algorithms will start optimizing themselves to an ever-greater degree, perhaps even reaching superhuman levels of intelligence.

This technological progress is likely to present us with historically unprecedented ethical challenges. Many experts believe that, alongside global opportunities, AI poses global risks that may be greater than, say, the risks of nuclear technology, which in any case have historically been underestimated. Scientific risk analysis also suggests that high potential damages should be taken very seriously even if the probability of their occurrence is low.

Although science fiction often portrays AI as robots with human-like characteristics, AI can encompass much more, including autonomous weapons: artificial intelligence systems that are programmed to cause harm. In the hands of criminals, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off", so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of intelligence and autonomy increase. Artificial intelligence will drastically change the future of this world as we attempt to create self-learning machines.

The future of this world is in our hands; it depends on how we evolve these technologies. Artificial intelligence can be an asset or a danger, depending on the way we evolve our machines.



Mr. Deepak Sharma
Assistant Professor
Department of Information Technology

Tuesday, 15 May 2018


Smart Dust: The Future of IoT



Smart Dust is a collection of tiny wireless sensors, just a few millimetres in size. These tiny sensors can be deployed in the air and are very hard to detect. They work in groups of hundreds or more to monitor light, magnetism, temperature, vibration, and chemicals, and they build on radio frequency identification (RFID) technology. Smart Dust is emerging as the future of IoT because the general principle of IoT is to use sensors everywhere to monitor and transmit data back to a database or computation center for analysis.

The drawback in today's IoT is that sensors have to be placed across different areas, and these traditional sensors are large and require an external power supply. It is also difficult to place traditional sensors inside pipelines, in unreachable places, or where secrecy is required. In future this may not be an issue, as smart dust can be used almost anywhere. The proposal for Smart Dust was introduced in 1992 at DARPA for military applications; the aim was to build wireless sensor nodes with a volume of one cubic millimeter.

Components of Smart Dust

  • Different types of sensors
  • Optical transmission for device-to-device and device-to-base-station communication
  • Signal processing unit and control circuitry
  • Power source in the form of solar cells
  • TinyOS for working with low-power sensors. An alternative to TinyOS is Arduino, which can also be used to control the hardware, but the advantage of TinyOS is that it is designed specifically to work with low-power sensors over wireless communication.





A single smart dust device is called a mote, and a single mote consists of the components mentioned above. The challenge with smart dust is to package all of these components into one tiny entity. Advances in digital circuitry and wireless communication should make smart dust a successful technology in the near future, in fields such as military applications, healthcare, agriculture and forest protection.
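
As an illustrative sketch only (real motes run embedded firmware such as TinyOS applications, not Python; the sensor and radio functions below are simulated stand-ins), a mote's duty cycle of sensing, processing locally and transmitting only significant events can be mimicked like this:

    # Illustrative simulation of a mote's duty cycle (not real mote firmware).
    import random

    THRESHOLD = 30.0   # report only readings above this value to conserve power

    def sense_temperature():
        # Stand-in for an on-mote sensor read.
        return random.uniform(20.0, 40.0)

    def transmit(value):
        # Stand-in for the mote's optical or RF link to the base station.
        print(f"mote -> base station: {value:.1f}")

    for _ in range(10):           # ten wake/sense/sleep cycles
        reading = sense_temperature()
        if reading > THRESHOLD:   # process locally; transmit only significant events
            transmit(reading)

Reporting only threshold-crossing events, rather than every reading, is what lets such a tiny power source last.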



Mr. Vijay Gupta
Assistant Professor
Department of Information Technology



Monday, 7 May 2018


Crypto Currency in India

Crypto Currency is a digital asset that works as a medium of exchange. It uses cryptography to secure its transactions, to control the creation of additional units, and to verify the transfer of assets. It is designed to be secure and, in many cases, anonymous. It is associated with the internet and uses cryptography, the process of converting legible information into an almost uncrackable code, to track purchases and transfers.
Its genesis stems from the disciplines of mathematical theory and computer science, turning it into a mode of online money exchange. It uses cryptography, networking, open-source software and block chain technology.
These are virtual currencies which use decentralized control. This makes them different from centralized electronic money and the central banking system.
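
As a minimal sketch of the underlying idea (a toy hash chain for illustration, not a real crypto currency), the following shows how cryptography links each block of transactions to the previous one, so that tampering with any earlier record breaks the chain:

    # Toy hash chain illustrating how block chain technology secures records.
    # A teaching sketch, not a real crypto currency implementation.
    import hashlib
    import json

    def make_block(transactions, prev_hash):
        block = {"transactions": transactions, "prev_hash": prev_hash}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    genesis = make_block(["A pays B 5"], prev_hash="0" * 64)
    block2 = make_block(["B pays C 2"], prev_hash=genesis["hash"])

    # Editing the genesis block would change its hash and break block2's link.
    print(block2["prev_hash"] == genesis["hash"])  # True while the chain is intact

In a real system such as Bitcoin, each transfer is additionally authorized with a digital signature, and miners must find block hashes that meet a difficulty target.
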
The first virtual currency system, Bitcoin, was created in 2009 by "Satoshi Nakamoto". However, it is not known whether "Satoshi Nakamoto" is a real name or a pseudonym, or whether it represents one person or a group. Since then, a number of crypto currencies have been created across the world. The rationale was to shift power and control from institutions to individuals. According to Gavin Andresen, a software developer and entrepreneur based in Amherst, Mass., "Bitcoin is designed to bring us back to a decentralized currency of the people."
The number of crypto currencies available over the internet in April 2018 was over 1,565. By market capitalization in April 2018, Bitcoin had the largest blockchain network, followed by Ethereum, Ripple, Bitcoin Cash, Litecoin, and EOS.

Do we need crypto currency?

The concept of crypto currency is based on the fact that it shifts money control from the state to the individual. This places a lot of responsibility in the hands of individuals to use it judiciously, without compromising the good of society at large. It is a well-known fact that there may be many vested interests concerned only with amassing wealth at the expense of gullible folks.
An illustration of this point was the incident in April when the Delhi police busted a crypto currency minting unit at Dehradun. The gang had cheated people to the tune of more than a hundred crore rupees and had then vanished with the loot.
In this regard, the RBI (the country's central banking agency) has given lenders a period of three months to sever ties with crypto currency traders and exchanges. It has barred regulated lenders from facilitating crypto currency trading.
In spite of this, trading volumes have risen. Experts explain the phenomenon as follows: buying crypto currency now enables investors to convert rupees into coins that can later be swapped for other coins via private trading platforms once the rules stipulated by the RBI are enforced. It is astonishing that many investors are still hoping the government will frame suitable policies to regulate crypto currency rather than ban it outright, thereby moderating the stand taken by the RBI.
Our economy as envisaged in our constitution is a ‘socialist economy’ which aims at an equitable distribution of resources. Legalizing crypto currency would mean going against the principles of the constitution. Banning it outright will also not serve the purpose. Hence there has to be a via media which regulates its use.
Notable in this regard is the stand taken by the US Government, which has legalized the use of crypto currency subject to conditions:
The U.S. Congress may have the power to prohibit VCs under its power to “regulate Commerce with foreign Nations, and among the several States” and under its exclusive constitutional power “to coin Money” and “regulate the Value thereof”. In a decision taken in November 2014, the Court upheld the power of regulators to prosecute a defendant who “designed, created and minted coins called ‘Liberty Dollars,’ coins ‘in resemblance or in similitude’ (or made to look like) of U.S. coins.”
According to Gareth Murphy, a senior central banking officer (Central Bank of Ireland), "widespread use (of crypto currency) would also make it more difficult for statistical agencies to gather data on economic activity, which are used by governments to steer the economy". He cautioned that virtual currencies pose a new challenge to central banks' control over the important functions of monetary and exchange rate policy.

Hence we expect some farsighted decisions from the panel that has on its board members from the RBI, the finance ministry and market regulator SEBI.

Ms. Suchitra Srivastava
Associate Professor
Department of Management Studies





The Exposure Triangle


Photography is defined as drawing with light, with the help of a light-tight box called a camera. To click a picture, we need to provide sufficient light to the camera so that it can record the subject being clicked.

To understand how much light the camera needs to click a good picture, we should understand the concept of the exposure triangle, because overexposed and underexposed images are not considered good images.


    
[Image: Underexposed]


The exposure triangle is the combination of aperture, shutter speed and ISO. We must balance these three variables to get a perfectly exposed picture. The three settings depend on each other, so an adjustment to one will require adjustments to the others.



[Image: Correctly exposed]


Aperture

Aperture is the hole in the lens through which light enters the camera. The size of this hole can be increased or decreased according to the lighting conditions. A wide aperture means more light will enter the camera, while a narrow aperture allows in less light.

Aperture is denoted by the f-number, which is universal. The usual numerical values for the f-stop are 1.4, 1.8, 2.0, 2.8, 3.5, 4, 5.6, 8, 11, 16 and 22. If we are shooting in low-light conditions we need to lower the f-number to 2.0 or 1.8, while in situations with more light we need to raise the f-number to 16 or 22.
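
These marked values are not arbitrary: each full stop multiplies the f-number by roughly √2, because the light admitted varies with the square of the aperture diameter, so each step halves the light. A quick sketch generating the scale (the marked values 5.6, 11 and 22 are rounded conventions):

    # Each full f-stop multiplies the f-number by sqrt(2), halving the light,
    # since the light admitted varies with the square of the aperture diameter.
    import math

    stops = [round(math.sqrt(2) ** n, 1) for n in range(1, 10)]
    print(stops)  # [1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6]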

Shutter speed

Shutter speed decides the time for which the shutter remains open, and thus for how long the sensor is exposed to light. A faster shutter speed means less time, while a slower shutter speed means more time for which light enters the camera.

Shutter speed is measured in seconds or fractions of a second, such as 1/2, 1/4, 1/100, 1/250 and so on. A shutter speed of 1/100 means the shutter opens for one hundredth of a second, while 1/1000 means it opens for one thousandth of a second. We use high shutter speeds when we want to freeze action; slow shutter speeds capture motion as blur.

ISO

ISO is the sensitivity of the film in a film camera, or of the image sensor in a digital camera. It indicates how sensitive your film or image sensor is to light. A highly sensitive image sensor needs less light to expose the image, while a less sensitive sensor needs more light.

ISO is measured in numbers: the lower the number, the lower the sensitivity to light, and the higher the number, the more sensitive the sensor is. Depending on the camera, the lowest value starts from 50 and can go up to 6400 or beyond. Photographers who want more saturation, less noise and more detail in the picture should go for ISO 100 or 200.

Thus, combining ISO, aperture and shutter speed gives the correct exposure for a particular setting. One thing to keep in mind is that if any one of the elements is changed, we also need to adjust the other two to keep the exposure correct; otherwise the image will be too bright, too dark, or noisy.
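
To make this balancing act concrete, one common way to compare settings is the exposure value referenced to ISO 100 (a standard photographic formula, not specific to any camera brand): EV = log2(N²/t) - log2(ISO/100), where N is the f-number and t the shutter time in seconds. Settings with the same EV admit the same overall exposure. A minimal sketch:

    # Exposure value referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100),
    # where N is the f-number and t the shutter time in seconds.
    import math

    def ev(f_number, shutter_seconds, iso=100):
        return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)

    print(round(ev(2.8, 1 / 200), 2))            # baseline setting
    print(round(ev(4.0, 1 / 100), 2))            # one stop narrower, one stop slower
    print(round(ev(4.0, 1 / 200, iso=200), 2))   # doubled ISO offsets the faster shutter
    # The first two agree only approximately because marked f-numbers are rounded.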


Ms. Sanyogita Choudhary
Assistant Professor
Department of Communication studies