Showing 2 results for "Statistical Feature"

Volume 12, Issue 6 (3-2021)
Abstract

Keyword extraction aims to extract words that can represent the meaning of a corpus. It plays a crucial role in information retrieval, recommendation systems, and corpus classification. In Persian, keyword extraction is known to be a hard task owing to the language's inherent complexity. In this research work, we address keyword extraction with a novel combination of statistical methods and machine learning. First, the required preprocessing is applied to the corpora. Then, three statistical methods and a Bayesian classifier are applied to the corpora to learn the keyword pattern. A post-processing step is also used to decrease the number of false-positive outputs. It should be pointed out that the built model can extract up to 20 keywords, which are compared with the keywords of the corresponding corpus. The evaluation results indicate that the proposed method can extract keywords from scientific corpora (specifically, theses and dissertations) with good accuracy.

1. Introduction
Automated keyword extraction is the process of identifying the terms and phrases that best represent the subject of a document. With the proliferation of digital documents today, extracting keywords manually is impractical. Many applications, such as automatic indexing, summarization, automatic classification, and text filtering, can benefit from this process, since keywords provide a compact representation of the text. Automated keyword generation can be broadly divided into two categories: keyword allocation and keyword extraction.
In keyword allocation, potential keywords are selected from a set of controlled vocabularies, while keyword extraction draws on the words of the text itself. Keyword extraction methods can be broadly classified into four groups: statistical approaches, linguistic approaches, machine learning approaches, and hybrid approaches.
 
2. Literature Review
Working with Persian text is a major challenge because of the paucity of prior research. The inadequacy of text pre-processing tools makes the task more complex than for Latin-script languages. The high dimensionality of the input data is another recurring challenge in such research, and it becomes more pronounced given the variety of Persian written forms (Gandomkar, 2017, pp. 233-256). Moin Maedi's article (2015, pp. 34-42) presents a method for extracting keywords in Persian; it extracts keywords from each text separately, without using any other text as training data.
Mohammad Razaghnouri (2017, pp. 16-27) used the Word2Vec method and TF-IDF frequencies to build a question-and-answer system in Persian, which is novel in its use of Word2Vec for Persian. With dimensionality-reduction techniques and Word2Vec, its 72% success rate could be improved further in the future.
3. Methodology
Accordingly, the current paper examines the integration of statistical keyword extraction methods with a Naive Bayes classifier. Initially, the input texts, which are dissertations in Persian, are preprocessed (stop-word removal, stemming, etc.). Then, using the available statistical features, each word is assigned a certain weight, and the most valuable words of each text are selected. The proposed model is trained on the selected set; the selected words are then processed by the trained model, and finally the words extracted by the model are evaluated against the keywords suggested by the authors themselves. Figure 1 depicts all the steps performed.
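As an illustration of this word-level pipeline, the following is a minimal sketch assuming scikit-learn; the toy corpus, the gold keyword labels, and the pairing of TF-IDF with raw frequency are illustrative stand-ins for the paper's actual feature set, not the authors' implementation.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.naive_bayes import GaussianNB

# Toy preprocessed corpus and author-chosen keywords (illustrative only)
docs = [
    "keyword extraction selects representative words from a corpus",
    "naive bayes classifiers combine statistical word features",
]
author_keywords = [{"keyword", "extraction"}, {"bayes", "features"}]

tfidf = TfidfVectorizer().fit(docs)
counts = CountVectorizer(vocabulary=tfidf.vocabulary_)
T, C = tfidf.transform(docs), counts.transform(docs)
vocab = tfidf.get_feature_names_out()

# One training sample per (document, word) pair: its statistical weights,
# labeled 1 if the authors listed that word as a keyword
X, y = [], []
for d in range(len(docs)):
    for w in T[d].nonzero()[1]:
        X.append([T[d, w], C[d, w]])          # [tf-idf weight, raw frequency]
        y.append(int(vocab[w] in author_keywords[d]))

clf = GaussianNB().fit(np.array(X), np.array(y))  # learns the keyword pattern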
 
4. Results
The literature review shows that this is the first time this combination has been used to extract Persian keywords. Unlike other studies, where each text is a sample for the classifier's input and its words are the properties, in this paper the words of each input text are categorized, and the statistical methods' scores serve as the features. The choice of keywords by authors has always been a personal decision, and different people may not make the same decision when choosing a set of keywords for the same text.
Figure 1
Proposed research framework for keyword extraction (recoverable flowchart steps: rooting & normalization; creating unigrams and bigrams)

The current paper attempts to create a model and program with a new approach that, despite the small number of input documents, extracts keywords without dependence on the subject orientation of the dissertations or the meaning of their words, using only the statistical features of the words in each text. According to Tables 1 and 2, the developed model can extract up to 20 keywords from each dissertation with an overall accuracy of 98.1% in the best condition, which is the use of the maximum-frequency feature. The keywords written in each dissertation are matched with 84% and 98% recall for one-word and two-word expressions, respectively.
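For reference, the evaluation criteria reported in Tables 1 and 2 follow the standard definitions, which the paper does not restate (TP, FP, and FN denote true positives, false positives, and false negatives):

\[ \text{Precision} = \frac{TP}{TP+FP}, \qquad \text{Recall} = \frac{TP}{TP+FN}, \qquad F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \]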
Table 1
Evaluation criteria for Bayesian outputs with different sets of statistical features

Statistical Features            Accuracy   Recall   F1-Score   Precision
Tf_Idf, Most Frequent, Tf_Isf   97.2%      0.98     0.98       0.98
Most Frequent                   98.1%      0.982    0.99       0.99
Tf_Idf, Tf_Isf                  99.8%      0.91     0.94       0.99
 
Table 2
Evaluation of post-processing on test data for keyword-categorized outputs

Step        Statistical Feature   Recall   F1-Score   Precision   Number of words   Keywords selected by authors
Uni-grams   Most Frequent         0.84     0.323      0.2         210               42
Bi-grams    Most Frequent         0.98     0.888      0.8         158               34
 
 

Morteza Zadkarami, Mehdi Shahbazian, Karim Salahshoor
Volume 16, Issue 9 (11-2016)
Abstract

Oil pipeline leakages, if not properly treated, can result in huge losses. The first step in tackling these leakages is to diagnose their location. This paper employs a data-driven Fault Detection and Isolation (FDI) system not only to detect the occurrence and location of a leakage fault, but also to estimate its severity (size) with high accuracy. In the present study, the Golkhari-Binak pipeline, located in southern Iran, is modeled in the OLGA software, and the data used to train the data-driven FDI system are acquired from this model. Different leakage scenarios are applied to the pipeline model; the corresponding inlet pressures and outlet flow rates are then recorded as the training data. The time-domain data are transformed into the wavelet domain, and the statistical features of the data are extracted from both the wavelet and the time domains. Each set of features is then fed into a Multi-Layer Perceptron Neural Network (MLPNN), which functions as the FDI system. The results show that the system using the wavelet-based statistical features outperforms the one based on time-domain features. The proposed FDI system is also able to diagnose the leakage location and severity with a low False Alarm Rate (FAR) and a high Correct Classification Rate (CCR).
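As a rough illustration of this pipeline (a sketch, not the authors' implementation), the following assumes PyWavelets (pywt) and scikit-learn; the wavelet choice, decomposition level, network size, and the random placeholder signals are all assumptions standing in for the OLGA-generated records.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_stat_features(signal, wavelet="db4", level=3):
    # Decompose a 1-D signal and collect simple statistics from each band
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(band) for band in coeffs
                     for f in (np.mean, np.std, np.min, np.max)])

# Placeholder signals standing in for the recorded inlet-pressure and
# outlet-flow-rate scenarios; labels encode leakage location/severity classes
rng = np.random.default_rng(0)
signals = rng.standard_normal((100, 1024))
labels = rng.integers(0, 4, size=100)

X = np.vstack([wavelet_stat_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, labels)  # the MLPNN as FDI system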
