
Spam Filtering Security Evaluation Using MILR Classifier

Kunjali Pawar, Madhuri Patil

Abstract


Statistical spam filters are vulnerable to adversarial attacks. Under multiple instance learning, an e-mail is treated as a bag of instances: it is classified as spam if at least one instance in the corresponding bag is spam, and as legitimate if all of its instances are legitimate. Conventional filters rely on classical design methods that do not take adversarial settings into account. In this paper, a security evaluation framework for spam filtering is proposed, built on Multiple Instance Logistic Regression (MILR), to counter detection-evasion attacks. In addition, an adversary model is defined together with guidelines for simulating attack scenarios. The principal theme of the framework is to develop an enhanced model that anticipates attacks by utilizing the data distribution.
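To illustrate the bag-level decision rule described above, the following is a minimal Python sketch. It assumes a noisy-OR combination of instance-level logistic regression probabilities, which is one common MILR formulation, and it uses purely hypothetical features and weights; it is not the model trained in the paper.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def instance_spam_probability(x, w, b):
        # Instance-level logistic regression: P(instance is spam | x).
        return sigmoid(np.dot(w, x) + b)

    def bag_spam_probability(instances, w, b):
        # Noisy-OR combination: the e-mail (bag) is spam if at least one
        # instance is spam, so P(bag spam) = 1 - prod_i (1 - p_i).
        probs = [instance_spam_probability(x, w, b) for x in instances]
        return 1.0 - np.prod([1.0 - p for p in probs])

    # Toy e-mail split into two instances of three features each
    # (hypothetical features, e.g. frequencies of selected terms).
    email_instances = [np.array([0.0, 0.1, 0.0]),   # legitimate-looking segment
                       np.array([2.0, 1.5, 3.0])]   # spam-looking segment
    w, b = np.array([1.2, 0.8, 1.5]), -2.0          # illustrative weights only

    p = bag_spam_probability(email_instances, w, b)
    print("P(spam) = %.3f -> %s" % (p, "spam" if p > 0.5 else "legitimate"))

Because a single spam-like instance dominates the noisy-OR combination, the bag is labelled spam, which matches the rule that an e-mail is legitimate only when every instance in its bag is legitimate.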


Keywords


Adversary, Multiple Instance Learning, Multiple Instance Logistic Regression (MILR), Spam Filtering


DOI: http://dx.doi.org/10.36039/AA032016001.



This work is licensed under a Creative Commons Attribution 3.0 License.