ARTIFICIAL INTELLIGENCE PROJECTS


Artificial intelligence is a science and technology based on disciplines such as Computer Science, Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving.
OUR COMPANY VALUES: Quality, commitment, and success.
OUR CUSTOMERS are delighted with the business benefits of the Technofist software solutions.

IEEE 2018-2019 ARTIFICIAL INTELLIGENCE BASED PROJECTS

  • Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to intelligent humans. Technofist provides the best and latest IEEE projects on Artificial Intelligence for final-year engineering students from the Computer Science and Information Science fields. Students who carry out academic projects with Technofist receive training, complete documentation, and real-time implementation of these projects. Artificial Intelligence project titles and their abstracts are displayed below, and we provide a complete explanation of each synopsis. All the latest IEEE projects on Artificial Intelligence are available; titles and abstracts can be downloaded from our website.

IEEE 2018-2019 Artificial Intelligence project list for M.Tech / BE / B.Tech / MCA / M.Sc students in Bangalore.

TEB001
PREDICT THE DIAGNOSIS OF HEART DISEASE PATIENTS USING CLASSIFICATION MINING TECHNIQUES

ABSTRACT - Data mining can be described as the automatic discovery of relationships in large databases; in some cases it is also used to predict relationships based on the results discovered. Data mining plays an important role in applications such as business organizations, e-commerce, the health care industry, and science and engineering. In the health care industry, data mining is mainly used for disease prediction. The objective of our work is to predict the diagnosis of heart disease with a reduced number of attributes. Fourteen attributes are involved in predicting heart disease; these are reduced to six by using a genetic algorithm. Subsequently, three classifiers, namely Naive Bayes, classification by clustering, and decision tree, are used to predict the diagnosis of heart disease after the reduction in the number of attributes.
Contact:
 +91-9008001602
 080-40969981
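
The classification step can be sketched with a tiny Gaussian Naive Bayes classifier. Everything below is illustrative: the three attributes are assumed to be the ones surviving the genetic-algorithm reduction, and the records are synthetic stand-ins, not the paper's dataset.

```python
import math
import random

# Hypothetical records with three attributes assumed to survive the genetic
# reduction: (age, cholesterol, max heart rate) -> diagnosis (1 = disease).
random.seed(0)
def make_record(sick):
    base = (62, 280, 120) if sick else (45, 200, 170)
    return [b + random.gauss(0, 10) for b in base], int(sick)

data = [make_record(i % 2 == 0) for i in range(200)]

def train_gnb(rows):
    # Gaussian Naive Bayes: per-class mean and variance for each attribute.
    stats = {}
    for label in (0, 1):
        cols = list(zip(*[x for x, y in rows if y == label]))
        stats[label] = []
        for col in cols:
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col)
            stats[label].append((mean, var))
    return stats

def predict(stats, x):
    # Pick the class with the higher Gaussian log-likelihood.
    def log_likelihood(label):
        return sum(-((v - m) ** 2) / (2 * s) - 0.5 * math.log(2 * math.pi * s)
                   for v, (m, s) in zip(x, stats[label]))
    return max((0, 1), key=log_likelihood)

stats = train_gnb(data)
accuracy = sum(predict(stats, x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```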

TEB002
WEB IMAGE SEARCH RE-RANKING WITH CLICK-BASED SIMILARITY AND TYPICALITY

ABSTRACT In image search re-ranking, besides the well-known semantic gap, intent gap, which is the gap between the representation of users’ query/demand and the real intent of the users, is becoming a major problem restricting the development of image retrieval. To reduce human effects, in this paper, we use image click-through data, which can be viewed as the implicit feedback from users, to help overcome the intention gap, and further improve the image search performance. Generally, the hypothesis—visually similar images should be close in a ranking list—and the strategy—images with higher relevance should be ranked higher than others—are widely accepted. To obtain satisfying search results, thus, image similarity and the level of relevance typicality are determinate factors correspondingly. However, when measuring image similarity and typicality, conventional re-ranking approaches only consider visual information and initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality. First, to learn an appropriate similarity measurement, we propose click-based multi-feature similarity learning algorithm, which conducts metric learning based on click-based triplets selection, and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into same clusters, and get the final re-rank list by calculating click-based clusters typicality and within clusters click-based image typicality in descending order. 
Our experiments conducted on two real-world query-image data sets with diverse representative queries show that our proposed re-ranking approach can significantly improve initial search results, and outperform several existing re-ranking approaches.
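
The click-based intuition can be illustrated with a toy sketch, assuming an invented query-click log. The paper's full pipeline (metric learning on click-based triplets, multiple kernel learning, spectral clustering) is omitted here; this shows only how co-clicks yield similarity and total clicks yield typicality.

```python
from collections import defaultdict

# Hypothetical click log: query -> list of (image_id, click count).
click_log = {
    "sunset": [("img1", 50), ("img2", 40), ("img5", 2)],
    "beach sunset": [("img1", 30), ("img2", 25), ("img3", 1)],
    "cat": [("img4", 60), ("img5", 3)],
}

def click_similarity(a, b):
    # Images clicked for the same queries are considered similar;
    # similarity = co-click mass summed over shared queries.
    score = 0
    for results in click_log.values():
        clicks = dict(results)
        if a in clicks and b in clicks:
            score += min(clicks[a], clicks[b])
    return score

def rerank(initial):
    # Re-rank an initial list by click-based typicality (total clicks an
    # image received across the log), breaking ties by the initial order.
    totals = defaultdict(int)
    for results in click_log.values():
        for img, c in results:
            totals[img] += c
    return sorted(initial, key=lambda img: -totals[img])

print(rerank(["img5", "img3", "img1", "img2"]))
```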

TEB003
REMOTE MULTIMODAL BIOMETRIC IDENTIFICATION BASED ON THE FUSION OF THE IRIS AND THE FINGERPRINT

ABSTRACT - With the development of various services through the Web, and especially with the emergence of electronic commerce, all suppliers of products and services are making considerable efforts to secure themselves against all possible fraudulent intrusions. It appears that biometrics is the only method that can satisfy the requirements of remote identification in terms of relevance and reliability. In this paper, we propose a client-server network architecture for remote multimodal biometric identification. We use two modalities, namely the human iris and fingerprint, in order to strengthen security, since unimodal biometric systems cannot always be relied upon to perform recognition. However, combining the information presented by the various modalities may allow a precise recognition of identity. Concerning the fusion of these two modalities, we used a new approach at the score level based on a classification method using a decision tree and a combination method using the sum rule. The results obtained confirm that the proposed method helped significantly to optimize the performance of the identification.
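
The sum-rule fusion at the score level can be illustrated in a few lines. The weights and threshold below are hypothetical; a deployed system would tune them on validation data, and the paper additionally uses a decision-tree classifier on the scores.

```python
# Hypothetical matcher scores in [0, 1]; weights and threshold are illustrative.
def sum_rule_fusion(iris_score, fingerprint_score, w_iris=0.5, w_fp=0.5):
    # Sum rule at the score level: a weighted combination of the two matchers.
    return w_iris * iris_score + w_fp * fingerprint_score

def decide(iris_score, fingerprint_score, threshold=0.6):
    # Accept the claimed identity only if the fused score clears the threshold.
    return sum_rule_fusion(iris_score, fingerprint_score) >= threshold

print(decide(0.9, 0.5))  # a strong iris match compensates a weaker fingerprint
print(decide(0.4, 0.3))  # both modalities weak: reject
```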

TEB004
TRUTH DISCOVERY IN CROWDSOURCED DETECTION OF SPATIAL EVENTS

ABSTRACT - The ubiquity of smartphones has led to the emergence of mobile crowdsourcing tasks such as the detection of spatial events when smartphone users move around in their daily lives. However, the credibility of those detected events can be negatively impacted by unreliable participants with low-quality data. Consequently, a major challenge in quality control is to discover true events from diverse and noisy participants' reports. This truth discovery problem is uniquely distinct from its online counterpart in that it involves uncertainties in both participants' mobility and reliability. Decoupling these two types of uncertainties through location tracking will raise severe privacy and energy issues, whereas simply ignoring missing reports or treating them as negative reports will significantly degrade the accuracy of the discovered truth. In this paper, we propose a new method to tackle this truth discovery problem through principled probabilistic modeling. In particular, we integrate the modeling of location popularity, location visit indicators, truth of events, and three-way participant reliability in a unified framework. The proposed model is thus capable of efficiently handling various types of uncertainties and automatically discovering truth without any supervision or the need for location tracking. Experimental results demonstrate that our proposed method outperforms existing state-of-the-art truth discovery approaches in the mobile crowdsourcing environment.
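
The idea of jointly estimating event truth and participant reliability can be sketched with a simple iterative scheme. The participants and reports below are invented, and the paper's probabilistic model additionally handles location popularity and visit uncertainty, which this toy version ignores.

```python
# Toy reports: participant -> {event: reported occurrence}; names invented.
reports = {
    "alice": {"e1": True,  "e2": True,  "e3": False},
    "bob":   {"e1": True,  "e2": False, "e3": False},
    "carol": {"e1": False, "e2": True,  "e3": True},
}
events = ["e1", "e2", "e3"]
reliability = {p: 0.8 for p in reports}      # initial trust in every source

for _ in range(10):
    # Estimate each event's truth by a reliability-weighted vote...
    truth = {}
    for e in events:
        vote = sum(reliability[p] * (1 if r[e] else -1)
                   for p, r in reports.items())
        truth[e] = vote > 0
    # ...then re-estimate reliability as agreement with the current truth.
    for p, r in reports.items():
        reliability[p] = sum(r[e] == truth[e] for e in events) / len(events)

print(truth, {p: round(v, 2) for p, v in reliability.items()})
```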

TEB005
FRAPPE: DETECTING MALICIOUS FACEBOOK APPLICATIONS

ABSTRACT - Communication technology has penetrated all areas of application. The last decade has witnessed a drastic evolution in information and communication technology due to the introduction of social media networks, and business growth is further achieved via these social media. Nevertheless, the increase in the usage of online social networks (OSNs) such as Facebook, Twitter, and Instagram has led to increased privacy and security concerns. Third-party applications are one of the many reasons for Facebook's attractiveness. Regrettably, users are unaware that many of the applications present on their profiles are malicious.

TEB006
BUILDING AN INTRUSION DETECTION SYSTEM USING A FILTER-BASED FEATURE SELECTION ALGORITHM

ABSTRACT - Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. In this paper, we propose a mutual information based algorithm that analytically selects the optimal feature for classification. This mutual information based feature selection algorithm can handle linearly and nonlinearly dependent data features. Its effectiveness is evaluated in the cases of network intrusion detection.
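
A minimal sketch of mutual-information-based feature scoring on toy categorical data. The feature names and records are invented, and the paper's algorithm additionally accounts for redundancy among already-selected features, which this sketch omits.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum over observed pairs of p(x,y) * log(p(x,y) / (p(x)p(y))).
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy traffic records (protocol, connection flag, label); values are
# illustrative, not from a real intrusion dataset.
records = [("tcp", "S0", 1), ("tcp", "SF", 0), ("udp", "SF", 0),
           ("tcp", "S0", 1), ("udp", "SF", 0), ("tcp", "S0", 1),
           ("udp", "S0", 1), ("tcp", "SF", 0)]
labels = [r[2] for r in records]
mi = {name: mutual_information([r[i] for r in records], labels)
      for i, name in enumerate(["proto", "flag"])}
print({k: round(v, 3) for k, v in mi.items()})
# "flag" determines the label exactly here, so it gets the higher score.
```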

TEB007
SENTIMENT ANALYSIS OF TOP COLLEGES USING TWITTER DATA

ABSTRACT - In today's world, the opinions and reviews accessible to us are among the most critical factors shaping our views and influencing the success of a brand, product, or service. With the advent and growth of social media, stakeholders often take to expressing their opinions on popular platforms, notably Twitter. While Twitter data is extremely informative, it presents a challenge for analysis because of its humongous and disorganized nature. This paper is a thorough effort to dive into the novel domain of performing sentiment analysis of people's opinions regarding top colleges in India. It also takes additional preprocessing measures, such as the expansion of net lingo and the removal of duplicate tweets.
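
The two preprocessing steps mentioned, net-lingo expansion and duplicate removal, can be sketched as follows. The lingo dictionary and the tweets are hypothetical; a real system would use a much larger dictionary.

```python
import re

# A tiny, hypothetical net-lingo dictionary.
NET_LINGO = {"gr8": "great", "u": "you", "b4": "before", "idk": "i do not know"}

def preprocess(tweet):
    # Lowercase, strip URLs and @mentions, expand net lingo, rejoin words.
    tweet = tweet.lower()
    tweet = re.sub(r"https?://\S+|@\w+", "", tweet)
    words = [NET_LINGO.get(w, w) for w in tweet.split()]
    return " ".join(words)

def dedupe(tweets):
    # Remove duplicate tweets after normalization, keeping first occurrences.
    seen, out = set(), []
    for t in tweets:
        norm = preprocess(t)
        if norm not in seen:
            seen.add(norm)
            out.append(norm)
    return out

tweets = ["Gr8 campus, u should visit!", "gr8 campus, u should visit!",
          "IDK about the hostels @college_fan"]
print(dedupe(tweets))
```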

TEB008
PRACTICAL PRIVACY-PRESERVING MAPREDUCE BASED K-MEANS CLUSTERING OVER LARGE-SCALE DATASET

ABSTRACT - Clustering techniques have been widely adopted in many real-world data analysis applications, such as customer behavior analysis, medical data analysis, and digital forensics. With the explosion of data in today's big data era, a major trend for handling clustering over large-scale datasets is outsourcing it to HDFS platforms. This is because cloud computing offers not only reliable services with performance guarantees but also savings on in-house IT infrastructure. However, as datasets used for clustering may contain sensitive information, e.g., patient health information, commercial data, and behavioral data, directly outsourcing them to distributed servers inevitably raises privacy concerns.
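
One round of MapReduce-style k-means can be sketched on a single machine: the "map" phase assigns each point to its nearest center, and the "reduce" phase averages each group. The points are synthetic, and the paper's privacy-preserving layer (protecting the outsourced data itself) is omitted here.

```python
import random

random.seed(1)
# Two synthetic clusters, around (0, 0) and (8, 8).
points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)] + \
         [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(50)]

def kmeans_round(points, centers):
    # Map: emit (nearest-center index, point). Reduce: average each group.
    groups = {i: [] for i in range(len(centers))}
    for p in points:
        i = min(range(len(centers)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
        groups[i].append(p)
    return [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
            for i, g in groups.items()]

centers = [(0.0, 0.0), (1.0, 1.0)]
for _ in range(10):
    centers = kmeans_round(points, centers)
print([tuple(round(c, 1) for c in ctr) for ctr in centers])
```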

TEB009
FIDOOP-DP: DATA PARTITIONING IN FREQUENT ITEMSET MINING ON HADOOP CLUSTERS

ABSTRACT - Traditional parallel algorithms for mining frequent itemsets aim to balance load by equally partitioning data among a group of computing nodes. We start this study by discovering a serious performance problem of the existing parallel Frequent Itemset Mining algorithms. Given a large dataset, data partitioning strategies in the existing solutions suffer high communication and mining overhead induced by redundant transactions transmitted among computing nodes. We address this problem by developing a data partitioning approach called FiDoop-DP using the MapReduce programming model. The overarching goal of FiDoop-DP is to boost the performance of parallel Frequent Itemset Mining on Hadoop clusters.
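
A single-machine sketch of the underlying frequent-itemset counting, plus a toy partitioner in the spirit of FiDoop-DP's goal of co-locating related transactions. The transactions and the routing rule are illustrative, not the paper's actual partitioning scheme.

```python
from collections import Counter
from itertools import combinations

# Toy transactions; min_support is illustrative.
transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                {"bread", "eggs"}, {"milk", "eggs"}, {"milk", "bread"}]
min_support = 3

# Count candidate 2-itemsets (the core operation of frequent itemset mining).
pair_counts = Counter(frozenset(p) for t in transactions
                      for p in combinations(sorted(t), 2))
frequent_pairs = {p for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)

# A simple partitioner in FiDoop-DP's spirit: route each transaction to the
# node keyed by its most frequent item, so transactions sharing hot items are
# mined on the same node instead of being transmitted redundantly.
item_counts = Counter(i for t in transactions for i in t)
def node_for(transaction, n_nodes=2):
    anchor = max(sorted(transaction), key=lambda i: item_counts[i])
    return hash(anchor) % n_nodes
```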

TEB010
SOCIALQ&A: AN ONLINE SOCIAL NETWORK BASED QUESTION AND ANSWER SYSTEM

ABSTRACT - Question and Answer (Q&A) systems play a vital role in our daily life for information and knowledge sharing. Users post questions and pick questions to answer in the system. Due to the rapidly growing user population and the number of questions, it is unlikely for a user to stumble upon a question by chance that (s)he can answer. Also, altruism does not encourage all users to provide answers, not to mention high-quality answers with a short answer wait time. The primary objective of this paper is to improve the performance of Q&A systems by actively forwarding questions to users who are capable and willing to answer them. To this end, we have designed and implemented SocialQ&A, an online social network based Q&A system.
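
The question-forwarding idea can be sketched as interest matching. The user profiles and topics below are hypothetical, and SocialQ&A's actual ranking also models answering willingness and social relationships, which this sketch omits.

```python
# Hypothetical user interest profiles.
users = {
    "alice": {"python", "ml"},
    "bob": {"cooking", "ml"},
    "carol": {"history"},
}

def forward_targets(question_topics, k=2):
    # Score users by topic overlap and forward to the top-k with any overlap.
    ranked = sorted(users, key=lambda u: -len(users[u] & question_topics))
    return [u for u in ranked[:k] if users[u] & question_topics]

print(forward_targets({"ml", "python"}))  # alice (2 shared topics), then bob
```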

TEB012
A NOVEL RECOMMENDATION MODEL REGULARIZED WITH USER TRUST AND ITEM RATINGS

ABSTRACT - We propose TrustSVD, a trust-based matrix factorization technique for recommendations. TrustSVD integrates multiple information sources into the recommendation model in order to reduce the data sparsity and cold start problems and their degradation of recommendation performance. An analysis of social trust data from four real-world data sets suggests that not only the explicit but also the implicit influence of both ratings and trust should be taken into consideration in a recommendation model.
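
A minimal matrix-factorization sketch with a trust-based regularizer that pulls trusters' latent vectors toward their trustees'. This illustrates the flavor of trust-regularized recommendation but is not TrustSVD's exact model (which also incorporates implicit feedback and bias terms); all data below is invented.

```python
import random

random.seed(0)
# Toy ratings (user, item, rating) and trust edges (truster, trustee).
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 2), (2, 1, 5), (2, 2, 1)]
trust = [(0, 1), (2, 0)]
K, n_users, n_items = 2, 3, 3
P = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(n_users)]
Q = [[random.uniform(-0.1, 0.1) for _ in range(K)] for _ in range(n_items)]

def loss(lam=0.1, tau=0.1):
    # Squared rating error + L2 regularization + trust term pulling the
    # latent vectors of connected users together.
    err = sum((r - sum(P[u][k] * Q[i][k] for k in range(K))) ** 2
              for u, i, r in ratings)
    reg = lam * sum(v * v for M in (P, Q) for row in M for v in row)
    soc = tau * sum((P[a][k] - P[b][k]) ** 2 for a, b in trust for k in range(K))
    return err + reg + soc

def sgd_epoch(lr=0.02, lam=0.1, tau=0.1):
    for u, i, r in ratings:
        e = r - sum(P[u][k] * Q[i][k] for k in range(K))
        for k in range(K):
            pu, qi = P[u][k], Q[i][k]
            P[u][k] += lr * (e * qi - lam * pu)
            Q[i][k] += lr * (e * pu - lam * qi)
    for a, b in trust:                       # social-regularization step
        for k in range(K):
            d = P[a][k] - P[b][k]
            P[a][k] -= lr * tau * d
            P[b][k] += lr * tau * d

before = loss()
for _ in range(200):
    sgd_epoch()
print(round(before, 2), "->", round(loss(), 2))
```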

TEB013
CONNECTING SOCIAL MEDIA TO E-COMMERCE: COLD-START PRODUCT RECOMMENDATION USING MICROBLOGGING INFORMATION

ABSTRACT - Unsupervised cross-domain sentiment classification is the task of adapting a sentiment classifier trained on a particular domain (source domain) to a different domain (target domain), without requiring any labeled data for the target domain. By adapting an existing sentiment classifier to previously unseen target domains, we can avoid the cost of manual data annotation for the target domain. We model this problem as embedding learning, and construct three objective functions that capture: (a) distributional properties of pivots (i.e., common features that appear in both source and target domains), (b) label constraints in the source domain documents, and (c) the relationship between the source and target domains.

TEB014
SECURE BIG DATA STORAGE AND SHARING SCHEME FOR CLOUD TENANTS

ABSTRACT - The Cloud is increasingly being used to store and process big data for its tenants, and classical security mechanisms using encryption are neither sufficiently efficient nor suited to the task of protecting big data in the Cloud. In this paper, we present an alternative approach that divides big data into sequenced parts and stores them among multiple Cloud storage service providers. Instead of protecting the big data itself, the proposed scheme protects the mapping of the various data elements to each provider using a trapdoor function.
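
The splitting idea can be sketched as chunking the data and scattering the chunks across hypothetical providers, with only the mapping kept secret. A keyed shuffle stands in for the paper's trapdoor function; provider names and the sample record are invented.

```python
import random

PROVIDERS = ["cloudA", "cloudB", "cloudC"]   # hypothetical storage providers

def split_and_store(data, chunk_size, key):
    # Divide the data into sequenced chunks and scatter them; the placement
    # order is derived from a secret key.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    order = list(range(len(chunks)))
    random.Random(key).shuffle(order)
    stores = {p: {} for p in PROVIDERS}
    mapping = [None] * len(chunks)           # original index -> (provider, slot)
    for slot, idx in enumerate(order):
        provider = PROVIDERS[slot % len(PROVIDERS)]
        stores[provider][slot] = chunks[idx]
        mapping[idx] = (provider, slot)
    return stores, mapping

def reassemble(stores, mapping):
    # Only the holder of the mapping can put the chunks back in order.
    return b"".join(stores[p][slot] for p, slot in mapping)

stores, mapping = split_and_store(b"patient-record-0042:BP=120/80", 4, key=7)
print({p: len(s) for p, s in stores.items()})
```

No single provider ever holds all the parts, so compromising one store reveals only unordered fragments.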

TEB015
USER-CENTRIC SIMILARITY SEARCH

ABSTRACT - User preferences play a significant role in market analysis. In the database literature there has been extensive work on query primitives, such as the well-known top-k query, that can be used for the ranking of products based on the preferences customers have expressed. Still, the fundamental operation that evaluates the similarity between products is typically performed while ignoring these preferences. Instead, products are depicted in a feature space based on their attributes, and similarity is computed via traditional distance metrics on that space. In this work we utilize the rankings of the products, based on the opinions of their customers, in order to map the products into a user-centric space where similarity calculations are performed.
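
Mapping products into a user-centric space can be sketched by representing each product by its rank in every customer's preference list, then measuring distance in that rank space rather than over raw attributes. Customers and products below are invented.

```python
# Customer preference lists, best-first; names are hypothetical.
preferences = {
    "u1": ["tv_a", "tv_b", "tv_c"],
    "u2": ["tv_a", "tv_c", "tv_b"],
    "u3": ["tv_a", "tv_b", "tv_c"],
}

def rank_vector(product):
    # Position of the product in every customer's ranking.
    return [prefs.index(product) for prefs in preferences.values()]

def user_centric_distance(p, q):
    # Manhattan distance between rank vectors: small when customers tend to
    # rank the two products similarly, regardless of raw product attributes.
    return sum(abs(a - b) for a, b in zip(rank_vector(p), rank_vector(q)))

for p, q in [("tv_a", "tv_b"), ("tv_b", "tv_c")]:
    print(p, q, user_centric_distance(p, q))
```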

TEB016
EFFICIENT PROCESSING OF SKYLINE QUERIES USING MAPREDUCE

ABSTRACT - The skyline operator has attracted considerable attention recently due to its broad applications. However, computing a skyline is challenging today since we have to deal with big data. For data-intensive applications, the MapReduce framework has been widely used recently. In this paper, we propose the efficient parallel algorithm SKY-MR+ for processing skyline queries using MapReduce. We first build a quadtree-based histogram for space partitioning by deciding whether to split each leaf node judiciously based on the benefit of splitting in terms of the estimated execution time. In addition, we apply the dominance power filtering method to effectively prune non-skyline points in advance.
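
A naive single-machine skyline with dominance filtering illustrates the operator itself; SKY-MR+ parallelizes this with MapReduce and prunes candidates early with quadtree histograms, both omitted here. The hotel data is a stock illustrative example.

```python
def dominates(p, q):
    # p dominates q if p is no worse in every dimension and strictly better
    # in at least one (here: smaller is better, e.g. price and distance).
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    # Keep every point not dominated by any other point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

hotels = [(50, 8), (60, 2), (40, 9), (55, 1), (70, 7)]  # (price, distance)
print(skyline(hotels))
```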

TEB017
A METHOD OF WSN TO MONITOR AND CONTROL THE COLD CHAIN LOGISTICS AS PART OF THE IOT TECHNOLOGY

TEB018
AUGMENTED REALITY USING FLEX SENSORS

TEB019
INTERACTIVE INTELLIGENT SHOPPING CART USING RFID AND ZIGBEE MODULES

TEB020
IOT BASED SMART HOME AUTOMATION SYSTEM WITH A PREDICTION ALGORITHM WITH AI

TEB021
SIXTH SENSE DEVICE

TEB022
SMART CITIES FOR FUTURE: DESIGN OF DATA ACQUISITION METHOD BASED ON IOT WITH ARTIFICIAL INTELLIGENCE

TEB023
SMART ROOMS FOR POWER SAVING USING VIDEO PROCESSING

TEB024
TRUTH DISCOVERY IN CROWDSOURCED DETECTION OF SPATIAL EVENTS
CONTACT US

For IEEE papers and full abstracts

+91 9008001602


technofist.projects@gmail.com




ABOUT ARTIFICIAL INTELLIGENCE

Artificial intelligence is a science and technology based on disciplines such as Computer Science, Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is in the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving.
AI Technique
In the real world, knowledge has some unwelcome properties −

• Its volume is huge, next to unimaginable.

• It is not well-organized or well-formatted.

• It keeps changing constantly.

An AI technique is a way to organize and use knowledge efficiently, such that −

• It should be perceivable by the people who provide it.

• It should be easily modifiable to correct errors.

• It should be useful in many situations, even though it may be incomplete or inaccurate.

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:
1) Knowledge
2) Reasoning
3) Problem solving
4) Perception
5) Learning
6) Planning
7) Ability to manipulate and move objects