Accepted Papers

  • Incorporating Synonyms into Snippet Based Recommendation System
    Ujwala M. Patil and Megha R. Sisode, R. C. Patel Institute of Technology, India
    Recently, use of the Internet for information retrieval has grown rapidly, yet it remains difficult to extract relevant information quickly. Search engines sometimes fail to understand a user's search intent. Query recommendation can help users state their information need precisely, so that the search engine can return results that meet it. Various methods retrieve information based on user history and snippets, but these alone often fail to satisfy users' needs; supplementing history and snippets with synonyms performs better. Moreover, user preferences can be used to build a user profile that supports effective recommendations. Here, for a given query recommendation, the synonyms are extracted online. The synonym-based method ranks the clicked URLs at the top of the results based on the user profile. Evaluation of the system shows that the synonym-based approach gives better and more effective recommendations for all queries compared to previous methods.
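The expansion-and-ranking idea can be sketched as follows. The synonym table and snippet corpus below are illustrative stand-ins; the paper extracts synonyms online and uses a learned user profile, neither of which is reproduced here.

```python
# Sketch of synonym-based query expansion for snippet recommendation.
# SYNONYMS and SNIPPETS are hypothetical illustrative data.

SYNONYMS = {"car": {"automobile", "vehicle"}, "cheap": {"inexpensive", "budget"}}

SNIPPETS = [
    ("http://a.example/1", "budget automobile dealers near you"),
    ("http://a.example/2", "luxury vehicle reviews"),
    ("http://a.example/3", "gardening tips for spring"),
]

def expand(query):
    """Expand each query term with its known synonyms."""
    terms = set()
    for word in query.lower().split():
        terms.add(word)
        terms |= SYNONYMS.get(word, set())
    return terms

def recommend(query, k=2):
    """Rank snippets by word overlap with the expanded query."""
    terms = expand(query)
    scored = [(len(terms & set(text.split())), url) for url, text in SNIPPETS]
    scored.sort(reverse=True)
    return [url for score, url in scored[:k] if score > 0]

print(recommend("cheap car"))
```

With the sample data, "cheap car" expands to also cover "budget automobile", so the first snippet outranks the literal partial match.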
  • Web Testing Application With PHP Automated Tool
    Iulia Ștefan and Ioan Ivan, Technical University, Romania
    Web application development has experienced explosive growth in variety and complexity during the past decade. Most web-based applications are modelled as a three-tier architecture; the client-side experience remains virtually unchanged while the server side is updated. However, the client-side architecture can change with unexpected results. Consequently, testing procedures should support continuous improvement to keep pace with current trends and technology. This paper presents an automated tool for testing the client-side components of web applications. The test data is extracted using a crawler. Through several procedures, the general appearance of the page is analysed (CSS regression testing), and all of the content is tested, including links, images, forms, and scripts. The resulting test cases are created automatically, leaving the user the option to decide on their usage.
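The crawling step that feeds the test-case generator might look like the following minimal sketch, which extracts links, images, forms and scripts from a page using only the standard-library HTML parser. The CSS-regression checks of the actual tool are out of scope here.

```python
# Extract testable elements (links, images, forms, scripts) from a page
# so each can become a test case. Illustrative sketch, not the paper's tool.
from html.parser import HTMLParser

class TestTargetExtractor(HTMLParser):
    TRACKED = {"a": "href", "img": "src", "form": "action", "script": "src"}

    def __init__(self):
        super().__init__()
        self.targets = []          # (tag, url) pairs to test later

    def handle_starttag(self, tag, attrs):
        attr = self.TRACKED.get(tag)
        if attr:
            url = dict(attrs).get(attr)
            if url:
                self.targets.append((tag, url))

page = '<a href="/home">Home</a><img src="logo.png"><form action="/login"></form>'
parser = TestTargetExtractor()
parser.feed(page)
print(parser.targets)
```

Each extracted target would then be exercised by a separate test procedure (link reachability, image availability, form submission, and so on).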
  • Student's Presence and Movement Indicators in Hostels using RFID, GSM and Face Recognition
    Asha C. Korwar and Divya T, Appa Institute of Engineering and Technology, India
    Effective monitoring of students staying in a hostel is an essential activity. The proposed system uses RFID and GSM along with face recognition for monitoring and notifying the presence of students in the hostel. For this, each student's ID (identification) card is tagged with a passive RFID tag and the student's facial features are registered in the system. Face recognition and RFID are used for attendance, usually at the main entrance. A GSM modem sends notifications to parents or guardians about the student's arrival at and departure from the hostel. A student's location on campus and attendance percentage can be obtained through the website or by sending an SMS (short message service) to the system. An alert SMS is sent to the student, the parent and the hostel warden when the attendance percentage reaches a certain limit.
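The alerting rule at the end of the abstract can be sketched as below. The threshold value, student record and phone numbers are illustrative assumptions, and the GSM modem is mocked by a print statement.

```python
# Sketch of the attendance-alert rule: when attendance falls to a
# configured limit, notify student, parent and warden. All data is
# hypothetical; send_sms stands in for the GSM modem.

ALERT_LIMIT = 75.0   # assumed percentage threshold

def send_sms(number, text):          # stand-in for the GSM modem
    print(f"SMS to {number}: {text}")

def check_attendance(student, contacts):
    pct = 100.0 * student["days_present"] / student["days_total"]
    if pct <= ALERT_LIMIT:
        for number in contacts:
            send_sms(number, f"{student['name']} attendance at {pct:.1f}%")
    return pct

pct = check_attendance({"name": "A. Kumar", "days_present": 18, "days_total": 25},
                       ["student-phone", "parent-phone", "warden-phone"])
print(round(pct, 1))
```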
  • Enhanced Grid Scheduling Algorithm Using Tabu Search and MACO
    Aanchal Sewaiwar, Utkarsh Sharma and Manish Shrivastava, Rajiv Gandhi Proudyogiki Vishwavidyalaya, India
    The aim of grid task scheduling is to achieve high system throughput with less machine usage and to distribute various computing resources among applications. Inefficiency in a grid computing scheme can occur when all jobs require, or are assigned to, the same resources. This paper proposes a Multiple Ant Colony Optimization (MACO) algorithm for task scheduling in grid computing that incorporates the Tabu Search algorithm. The proposed MACO algorithm for job scheduling in the grid computing environment combines techniques from the Multiple Ant Colony System and Tabu Search, and focuses on local and global pheromone trail updates.
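The combination can be illustrated with a toy scheduler, not the authors' exact algorithm: ants assign jobs to machines guided by pheromone, the best assignment reinforces the trails (global update), trails evaporate, and recently chosen assignments are temporarily tabu to encourage diversification. All parameter values are illustrative.

```python
# Toy MACO-with-tabu sketch for job-to-machine scheduling (makespan).
import random

def schedule(job_costs, n_machines, n_ants=10, n_iters=20, tabu_len=3, seed=0):
    rng = random.Random(seed)
    n_jobs = len(job_costs)
    pher = [[1.0] * n_machines for _ in range(n_jobs)]
    best, best_span = None, float("inf")
    tabu = []                                     # recent (job, machine) moves
    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0.0] * n_machines
            assign = []
            for j, cost in enumerate(job_costs):
                weights = [pher[j][m] if (j, m) not in tabu else 1e-6
                           for m in range(n_machines)]
                m = rng.choices(range(n_machines), weights=weights)[0]
                assign.append(m)
                load[m] += cost
            span = max(load)                      # makespan of this assignment
            if span < best_span:
                best, best_span = assign, span
        for j, m in enumerate(best):              # global pheromone update
            pher[j][m] += 1.0 / best_span
        pher = [[0.9 * p for p in row] for row in pher]   # evaporation
        tabu = (tabu + [(j, best[j]) for j in range(len(best))])[-tabu_len:]
    return best, best_span

best, span = schedule([4, 2, 7, 3, 5], n_machines=2)
print(best, span)
```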
  • A New Top-k Conditional XML Preference Queries
    Shaikhah Alhazmi and Mourad Ykhlef, King Saud University, Kingdom of Saudi Arabia
    Preference querying is an important technology in applications ranging from e-commerce to personalized search engines, and much recent research in the Artificial Intelligence and Database fields has been dedicated to it. Several formalisms allowing preference reasoning and specification have been proposed in the Artificial Intelligence domain. In the Database field, by contrast, interest has focused mainly on extending the standard Structured Query Language (SQL) and the eXtensible Markup Language (XML) with preference facilities in order to provide personalized query answering; more precisely, on the notion of the Top-k preference query and on the development of efficient methods for evaluating such queries. A Top-k preference query returns the k data tuples that are most preferred according to the user's preferences. Of course, Top-k preference query answering depends closely on the particular preference model underlying the semantics of the operators responsible for selecting the best tuples. In this paper, we consider Conditional Preference queries (CP-queries), in which preferences are specified by a set of rules expressed in a logical formalism. We introduce Top-k conditional preference queries (Top-k CP-queries) and present the operators BestK-Match and Best-Match for evaluating them.
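A drastically simplified illustration of the idea, not the paper's BestK-Match operator: each conditional rule says "if the condition holds, prefer tuples where the preference holds", and tuples are ranked by how many applicable rules they satisfy. The data and rules are invented for the example.

```python
# Toy Top-k conditional preference (CP) query evaluation.
cars = [
    {"id": 1, "type": "suv",   "color": "red",   "price": 30},
    {"id": 2, "type": "suv",   "color": "black", "price": 28},
    {"id": 3, "type": "sedan", "color": "red",   "price": 22},
]

# (condition, preference) pairs in a tiny logical formalism
rules = [
    (lambda t: t["type"] == "suv", lambda t: t["color"] == "black"),
    (lambda t: True,               lambda t: t["price"] < 25),
]

def top_k(tuples, rules, k):
    def score(t):
        # a tuple scores one point per rule whose condition and preference both hold
        return sum(1 for cond, pref in rules if cond(t) and pref(t))
    return sorted(tuples, key=score, reverse=True)[:k]

print([t["id"] for t in top_k(cars, rules, k=2)])
```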
  • Performance Comparison Of Two Phase Face Recognition Algorithms Based In Frequency Domain
    Archana Sable and Girish Chowdhary, S.R.T.M University, India
    Research in face recognition today focuses on developing algorithms that recognize faces despite variations in viewpoint, illumination, pose and expression. In this paper we propose face recognition algorithms that work in two phases in the frequency domain: TPFR-DCT-Mah, TPFR-DFT-Mah and TPFR-DWT-Mah. TPFR-DWT-Mah uses the low-frequency subband coefficients, i.e. the LL subbands after two-level wavelet decomposition, as DWT coefficients; TPFR-DCT-Mah uses the absolute values of DCT coefficients; and TPFR-DFT-Mah uses DFT amplitude spectra to represent the face image, i.e. the transformed image. The first phase of the proposed algorithm represents the test sample as a transformed image and uses the Mahalanobis distance between each training sample and the test sample to determine K "nearest neighbors" for the test sample. The second phase represents the test sample as a linear combination of the K nearest neighbors and uses the representation result to perform classification, finally assigning the test sample to the class with minimum deviation. The proposed algorithm assumes that the K "nearest neighbors" are from the same class as the test sample. The accuracy of the proposed algorithms TPFR-DCT-Mah, TPFR-DFT-Mah and TPFR-DWT-Mah has been measured and compared in terms of recognition rates, Equal Error Rate (EER) and Receiver Operating Characteristic (ROC) curves.
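The two-phase flow can be sketched schematically, with heavy simplifications: Euclidean distance stands in for Mahalanobis, raw feature vectors stand in for DCT/DFT/DWT coefficients, and phase two uses the mean neighbor distance per class rather than the paper's linear-combination representation.

```python
# Schematic two-phase nearest-neighbor classification (simplified).

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def two_phase_classify(train, test, k=3):
    # phase 1: pick the K training samples nearest to the test sample
    neighbors = sorted(train, key=lambda s: dist(s[0], test))[:k]
    # phase 2: among the K neighbors, choose the class with minimum deviation
    # (here: smallest mean distance, a stand-in for the linear-combination step)
    per_class = {}
    for vec, label in neighbors:
        per_class.setdefault(label, []).append(dist(vec, test))
    return min(per_class, key=lambda c: sum(per_class[c]) / len(per_class[c]))

train = [([0.0, 0.0], "A"), ([0.1, 0.2], "A"), ([5.0, 5.0], "B"), ([5.1, 4.9], "B")]
print(two_phase_classify(train, [0.2, 0.1]))
```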
  • Distributed Query Plan generation using Aggregation based Multi-Objective Genetic Algorithm
    Vikram Singh and Vikash Mishra, National Institute of Technology, India
    A major decision for the query processor of a database management system, in centralized as well as distributed environments, is how a query can be processed efficiently. A distributed database (DDB) helps to improve network performance, reliability, availability and modularity. A DDB consists of multiple, logically interrelated, autonomous databases over a well-structured computer network [CP84]. The performance of a distributed database system depends on how efficiently query plans are processed [G93]. Distribution, heterogeneity and autonomy are three important issues that affect querying data in a DDB [TVA11]. Because query processing in a DDB requires data communication among sites, communication cost plays an important and critical role in the selection of optimal query plans. The selection of optimal query plans is the core activity of distributed query processing; this optimization task is NP-complete in nature. The data transmissions, together with the local data processing, constitute a distribution strategy for a query; this strategy is referred to as Distributed Query Processing (DQP) [CY84].
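Why communication cost dominates plan selection can be shown in miniature: the same logical query can be executed by plans that differ mainly in how many bytes they ship between sites. All figures below are illustrative assumptions.

```python
# Hedged sketch of distributed query plan costing: total cost is local
# processing plus a per-byte network charge for shipped data.

# candidate plans: each is a list of (site_processing_cost, bytes_shipped) steps
plans = {
    "ship-all-to-site-A": [(5, 0), (2, 10_000)],
    "semijoin-first":     [(8, 0), (3, 1_200)],
}

COST_PER_BYTE = 0.01   # assumed network cost factor

def plan_cost(steps):
    return sum(cpu + COST_PER_BYTE * sent for cpu, sent in steps)

best = min(plans, key=lambda name: plan_cost(plans[name]))
print(best, plan_cost(plans[best]))
```

The semijoin-style plan does more local work but ships far less data, so it wins under this cost model; a real DQP optimizer searches a combinatorial space of such plans, which is what makes the task NP-complete.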
  • Implementation of three Block Matching Search Algorithms in H.264 on CUDA
    Renuka Joshi, Jancy James and Sagar Karwande, JSPM’s Rajarshi Shahu College of Engineering, India
    Over the past two decades, advances in computing, marked by fast networks, distributed systems and parallel computer architectures, indicate that parallelism is the future of computing. H.264 is a video coding standard that defines a bitstream structure and decoding method for video compression. In this paper, we combine the power of the two, i.e. GPGPU and H.264, using an NVIDIA CUDA GPGPU to further increase the efficiency of H.264. We use three block matching search algorithms, full search, diamond search and hexagon-based search, and implement each independently for an extensive analysis of Motion Estimation (ME). We discuss the components, such as the inter- and intra-prediction models, frame analysis and the CUDA implementation, that are required to make the system parallel on a GPGPU. Motion estimation is a computationally expensive process in video coding; this paper therefore overcomes the CPU time limitation of motion estimation by using a GPGPU.
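What each search algorithm computes can be shown with a minimal CPU sketch of full search using SAD (sum of absolute differences); diamond and hexagon search visit fewer candidate offsets, and the paper parallelizes these loops on CUDA, which is omitted here.

```python
# Minimal full-search block matching with SAD on toy 4x4 frames.

def sad(ref, cur, bx, by, dx, dy, bs):
    """SAD between the current block at (bx,by) and the reference block offset by (dx,dy)."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total

def full_search(ref, cur, bx, by, bs=2, radius=1):
    """Exhaustively test every motion vector within the search radius."""
    best_mv, best_cost = (0, 0), float("inf")
    h, w = len(ref), len(ref[0])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if 0 <= by + dy and by + dy + bs <= h and 0 <= bx + dx and bx + dx + bs <= w:
                cost = sad(ref, cur, bx, by, dx, dy, bs)
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

# the 2x2 block at (1,1) in `cur` matches the block whose top-left is (2,0) in `ref`
ref = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
cur = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(full_search(ref, cur, bx=1, by=1))
```

Since every block's search is independent, one CUDA thread (or thread block) per macroblock is the natural parallel decomposition.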
  • Effective User Navigation through Website Structure Improvement
    Thirupathi Duddilla and D. Vasumathi, Jawaharlal Nehru Technological University, India
    Web applications on the WWW (World Wide Web) should provide effective user navigation so that users can reach web pages in a short time. Designing well-structured websites is a challenge, because web developers and users understand website design differently. User navigation is very important in web applications, whose many users expect quick access and clear navigation that avoids ambiguity while traversing the website; when this is lacking, a navigation problem arises. This paper addresses how to provide effective user navigation with minimal changes to a website. It defines a mathematical programming model that improves user navigation and scales up very well: only small changes are made to the website, which keeps costs low and preserves the site's data organization. The main goal is to let users reach their targets quickly, according to their observed behavior.
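The improvement objective can be shown in miniature: measure how many clicks users need to reach their observed targets, then add a link that reduces that cost. A single greedy change stands in here for the paper's mathematical programming model; the site graph and user goals are invented.

```python
# Toy illustration of structure improvement reducing navigation cost.
from collections import deque

def clicks(links, start, goal):
    """BFS shortest path length in the site graph."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        page, d = queue.popleft()
        if page == goal:
            return d
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

site = {"home": ["products"], "products": ["item1"], "item1": []}
targets = [("home", "item1")]        # observed user goals

before = sum(clicks(site, s, g) for s, g in targets)
site["home"].append("item1")         # one small structural change
after = sum(clicks(site, s, g) for s, g in targets)
print(before, after)
```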
  • A Case Study On The Performance Of Feature Selection Algorithms
    Gurupadam Gundala and D. Vasumathi, Jawaharlal Nehru Technological University, India
  • Different IaaS Security Attributes and Comparative Study of Cloud Vendors
    Ramdas N. Khatake and Shridevi C. Karande, Maharashtra Institute of Technology, India
    Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. Despite such promises, cloud computing has not been adopted at the pace expected. Among the various reasons preventing its widespread adoption, the most serious issue is its inability to ensure data confidentiality, integrity, availability, authenticity and privacy. The Infrastructure as a Service (IaaS) model serves as the underlying basis for the other delivery models, and a lack of security in this layer will certainly affect the delivery models built upon it, i.e. PaaS and SaaS. This paper presents the different service models, the components of the IaaS model, the security attributes that ensure security in IaaS components, and a comparison of different IaaS vendors with respect to these security attributes.
  • Analysis Of MapReduce Framework In Very Large Databases
    Seema Maitrey and C.K. Jha, Krishna Institute of Engineering and Technology, India
    Extremely large amounts of data are being captured by today's organizations, and the volume continues to increase; it becomes computationally inefficient to analyse such huge data. Research in data mining has addressed the problem of discovering knowledge from these continuously growing large data sets. The amount of raw data available has been increasing at an exponential rate, and valuable information lies hidden in large databases; data mining has therefore become an attractive means of extracting this embedded, precious information. Over many years it has taken root in all kinds of application areas, giving rise to many data mining methods that are applied in several real-life fields. But not all of these methods can handle truly huge collections of data. In recent years, a number of computation- and data-intensive scientific data analyses have been established. To perform large-scale data mining analyses that meet the scalability and performance requirements of big data, several efficient parallel and concurrent algorithms have been applied. Many parallel algorithms are implemented using different parallelization techniques, commonly threads, MPI, or MapReduce, which yield different performance and usability characteristics. For computation-intensive problems the MPI model works efficiently, but it is complicated to put into practical use. There is currently considerable enthusiasm around the MapReduce paradigm for large-scale data analysis. Inspired by functional programming, it allows distributed computations on massive amounts of data to be expressed simply, and it is designed for large-scale data processing on clusters of commodity hardware. As a prominent parallel data processing tool, MapReduce is gaining significant momentum from both industry and academia as the volume of data to analyze grows rapidly. In this paper, we examine MapReduce, its advantages and disadvantages, and how it can be used in integration with other technologies.
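The paradigm can be shown in miniature with the canonical word-count job, making the map, shuffle and reduce phases explicit. A real framework such as Hadoop distributes these phases across a cluster; here they run in one process.

```python
# MapReduce in miniature: word count with explicit map, shuffle, reduce.
from collections import defaultdict

def map_phase(docs):
    for doc in docs:
        for word in doc.split():
            yield word, 1                  # emit (key, value) pairs

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:               # group values by key
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "data mining"]
print(reduce_phase(shuffle(map_phase(docs))))
```

The appeal for large-scale analysis is that the mapper and reducer are pure functions, so the framework can run them on any machine and rerun them on failure.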
  • TriBASim: A Novel TriBA On-Chip Network Simulator Based on SystemC
    GaoYuJin, Daniel Gakwaya, Jean Claude Gombaniro and Jean Pierre Niyigena, Beijing Institute of Technology, China
    In this paper, we develop a simulator for the Triplet-Based Architecture (TriBA) Network-on-Chip processor architecture. TriBA is a multiprocessor architecture whose basic idea is to bundle together the basic philosophy of object-oriented programming and hardware multicore systems [4]. In TriBA, nodes are connected in recursive triplets. TriBA network topology performance analyses have been carried out from different perspectives [1] and routing algorithms have been developed [2][3], but the architecture still lacks a simulator that researchers can use to run simple and fast behavioral analyses based on common parameters in the Network-on-Chip arena. We present TriBASim, a simulator for TriBA based on SystemC [6]. TriBASim will lessen the burden on TriBA researchers by letting them simply plug in the desired parameters and have the nodes and topology set up, ready for analysis.
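A recursive-triplet topology of the kind a simulator must set up can be generated as below. Level 1 is a fully connected triplet and each higher level wires together three copies of the level below; the specific inter-triplet wiring rule here (linking node 0 of each copy) is an assumption for illustration, not necessarily TriBA's actual scheme.

```python
# Sketch: generate nodes and edges of a recursive-triplet topology.

def triba_edges(level):
    if level == 1:
        return 3, [(0, 1), (1, 2), (0, 2)]        # one triplet
    n, sub = triba_edges(level - 1)
    edges = []
    for copy in range(3):                          # three shifted copies
        edges += [(a + copy * n, b + copy * n) for a, b in sub]
    # assumed inter-triplet links between the copies' first nodes
    edges += [(0, n), (n, 2 * n), (0, 2 * n)]
    return 3 * n, edges

nodes, edges = triba_edges(2)
print(nodes, len(edges))
```

In a SystemC implementation, each generated edge would become a channel binding between two node modules.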
  • Priority Based RSA Cryptographic Technique
    Meenakshi Shankar and Akshaya P, Sri Venkateswara College of Engineering, India
    The RSA algorithm is one of the most commonly used efficient cryptographic algorithms. It provides the required degree of confidentiality, data integrity and privacy. This paper integrates the RSA algorithm with a round-robin priority scheduling scheme in order to extend the level of security and reduce the effectiveness of intrusion, aiming at minimal overhead, increased throughput and privacy. In this method the sender uses the RSA algorithm to generate encrypted messages, which are sorted by priority and then sent. The receiver decrypts the received messages using the RSA algorithm according to their priority. This method reduces the risk of man-in-the-middle attacks and timing attacks, as the encrypted and decrypted messages are further reordered based on their priority. It also reduces the risk of power-monitoring attacks when only a very small amount of information is exchanged. It raises the bar on the standards of information security, ensuring greater efficiency.
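The core idea can be sketched with textbook RSA on tiny primes (insecure, for illustration only) and a heap that releases messages in priority order rather than arrival order. The priority values and messages are invented; the paper's round-robin element is not reproduced.

```python
# Toy sketch: RSA-encrypted messages processed in priority order.
import heapq

# tiny textbook RSA key: p=61, q=53, so n=3233, e=17, d=2753
N, E, D = 3233, 17, 2753

def encrypt(m): return pow(m, E, N)
def decrypt(c): return pow(c, D, N)

# (priority, message) pairs; lower number = higher priority
inbox = [(2, 123), (1, 456), (3, 789)]

heap = [(prio, encrypt(m)) for prio, m in inbox]
heapq.heapify(heap)

received = []
while heap:
    prio, cipher = heapq.heappop(heap)   # highest priority first
    received.append(decrypt(cipher))

print(received)
```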
  • Score Based Recognition of 2D Images in Face Recognition Using FDPCA
    Suganthi T and Andrews S, Mahendra Arts and Science College, India
    A high-level face recognition system is capable of matching two-dimensional face images against images with different poses and varied facial expressions from a dataset of face models. Identifying the original image in such a process involves several key strategic functions: the related image surface, key points and depth are extracted and compared with the dataset to retrieve the original image. In this paper, we propose a novel technique that extracts image components using a Fractional Distributor Principal Component Analysis (FDPCA) method and combines the features by cumulative integration before matching. This new technique improves accuracy compared to the conventional method, in which PCA is applied to the picture as a whole.
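The abstract does not specify FDPCA itself, so the sketch below shows only the conventional whole-image PCA baseline it improves upon: the first principal component of a sample set, found by power iteration on the covariance, onto which face vectors would be projected. The data is a tiny invented stand-in for image vectors.

```python
# Baseline PCA feature extraction: first principal component via power iteration.

def first_component(samples, iters=100):
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in samples]
    v = [1.0] * d
    for _ in range(iters):
        # apply the covariance without forming it: C v = X^T (X v) / n
        xv = [sum(row[j] * v[j] for j in range(d)) for row in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(n)) / n for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# variance is concentrated in the first coordinate, so the component aligns with it
faces = [[2.0, 0.1], [4.0, -0.1], [6.0, 0.0], [8.0, 0.2]]
pc = first_component(faces)
print([round(abs(x), 2) for x in pc])
```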
  • Smart Meeting System: An Approach to Recognize Patterns Using Tree Based Mining
    Puja Kose and Pankaj Bharne, Shri Sant Gajanan Maharaj College of Engineering, India
    Mining human interaction in meetings is useful for identifying how people react in various situations. Behavior represents the nature of a person, and mining helps to analyze how people express their opinions in meetings; for this, the study of linguistic knowledge is very important. Human interactions in meetings are categorized as propose, comment, acknowledgement, ask opinion, positive opinion and negative opinion. The sequence of human interactions is represented diagrammatically as a tree: a tree structure is employed to represent the interaction flow in a meeting, and the interaction flow helps to predict the likelihood of subsequent types of interaction. Tree pattern mining and subtree pattern mining algorithms are automated to analyze the structure of the tree and to extract interaction flow patterns, which are then interpreted as human interactions. The frequent patterns are used as a classification tool to access particular semantics, and the patterns are clustered to determine the behavior of a person.
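A heavily simplified stand-in for the mining step: each meeting is reduced to a sequence of interaction types, and frequent contiguous pairs are counted (a depth-1 approximation of the paper's full subtree pattern mining). The meeting data is invented for illustration.

```python
# Count frequent interaction-flow pairs across meetings.
from collections import Counter

meetings = [
    ["propose", "comment", "positive_opinion"],
    ["propose", "comment", "negative_opinion"],
    ["ask_opinion", "positive_opinion"],
]

def frequent_pairs(meetings, min_support=2):
    counts = Counter()
    for seq in meetings:
        for a, b in zip(seq, seq[1:]):     # contiguous interaction pairs
            counts[(a, b)] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(meetings))
```

Here "propose followed by comment" emerges as the only frequent flow, the kind of pattern the paper then clusters to characterize behavior.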
  • Segmentation Of Brain Tumor From MRI Images Using Unsupervised Artificial Bee Colony And FCM Clustering
    Neeraja R Menon, M Karnan and R Sivakumar, Tamilnadu College of Engineering, India
    Tumor segmentation of MRI brain images is still a challenging problem. This paper proposes a fast MRI brain image segmentation method based on the Artificial Bee Colony (ABC) algorithm. A threshold value in a continuous gray-scale interval is searched for, and the optimal threshold is found with the help of the ABC algorithm. To obtain an efficient fitness function for the ABC algorithm, after defining the grey number from Grey theory, the original image is decomposed by a discrete wavelet transform. A filtered image is then produced by applying noise reduction to the approximation image reconstructed from the low-frequency coefficients; at the same time, a gradient image is reconstructed from some of the high-frequency coefficients. A co-occurrence matrix based on the filtered image and the gradient image is then constructed, and an improved two-dimensional grey entropy is defined to serve as the fitness function of the ABC algorithm. Finally, the Fuzzy C-Means algorithm is used to cluster the segmented image.
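The threshold-search step can be illustrated with a simplified stand-in: random "bees" propose gray-level thresholds and the one maximizing between-class variance (Otsu's criterion, substituted here for the paper's two-dimensional grey entropy fitness) is kept. The pixel data is a tiny invented bimodal "image".

```python
# Simplified bee-style threshold search on a toy gray-level image.
import random

def between_class_variance(pixels, t):
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2          # Otsu's criterion

def bee_threshold(pixels, n_bees=50, seed=1):
    rng = random.Random(seed)
    candidates = [rng.randint(0, 255) for _ in range(n_bees)]   # scout bees
    return max(candidates, key=lambda t: between_class_variance(pixels, t))

# bimodal toy "image": dark background, bright tumor region
pixels = [10, 12, 15, 20, 11, 200, 210, 220, 205]
t = bee_threshold(pixels)
print(t, [1 if p > t else 0 for p in pixels])
```

The resulting binary mask separates the bright region; the full method then refines the segmentation with Fuzzy C-Means clustering.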



Copyright © CSEN 2014