Table 1 Results of the sliding window method (SWM)

From: Application of the sliding window method and Mask-RCNN method to nuclear recognition in oral cytology

| No. | Activation function | Epochs | No. of layers | Dropout | Training loss | Training accuracy | Test loss | Test accuracy |
|-----|---------------------|--------|---------------|---------|---------------|-------------------|-----------|---------------|
| 1   | ReLU                | 5      | 18            | 0.25    | 0.3356        | 0.8668            | 0.3266    | 0.8654        |
| 2   | ReLU                | 20     | 18            | 0.25    | 0.0891        | 0.9562            | 0.2104    | 0.9314        |
| 3   | ReLU                | 50     | 18            | 0.25    | 0.0526        | 0.9851            | 0.6082    | 0.9111        |
| 4   | ReLU                | 5      | 24            | 0.25    | 0.351         | 0.8524            | 0.3188    | 0.8705        |
| 5   | ReLU                | 20     | 24            | 0.25    | 0.0798        | 0.9251            | 0.1958    | 0.9314        |
| 6   | ReLU                | 50     | 24            | 0.25    | 0.0628        | 0.9982            | 0.4614    | 0.8908        |
| 7   | ReLU                | 5      | 18            | 0.5     | 0.3752        | 0.8226            | 0.3225    | 0.8705        |
| 8   | ReLU                | 20     | 18            | 0.5     | 0.2151        | 0.9198            | 0.2514    | 0.9213        |
| 9   | Sigmoid             | 20     | 18            | 0.25    | 0.7456        | 0.4659            | 0.7285    | 0.4785        |

  1. The results obtained using the sliding window method (SWM) with a convolutional neural network (CNN) are shown. To improve the loss values and correct-answer rates, several dropout rates were tried so that the number of epochs could be increased without overlearning (overfitting). We also deepened the network by increasing the number of convolution, pooling, and activation layers; an illustrative sketch of such a configuration follows these notes
  2. The CNN with the highest percentage of correct answers on the training data was No. 6 (99.8%), whereas its percentage of correct answers on the test data was only 89.1%; this gap is thought to indicate overlearning. The CNN with the highest accuracy without overlearning was No. 2 (93.1% correct answers on the test data)
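For orientation, below is a minimal sketch of the kind of patch-classifying CNN swept in Table 1, set to configuration No. 2 (ReLU activations, 20 epochs, dropout 0.25). The framework (Keras), patch size (64×64 RGB), filter counts, and layer topology are illustrative assumptions; the paper's exact 18- and 24-layer architectures are not specified here.

```python
# Hedged sketch of a Table 1-style configuration (No. 2: ReLU, 20 epochs,
# dropout 0.25). Patch size, filter counts, and depth are assumptions for
# illustration; they do not reproduce the paper's exact 18-layer topology.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_swm_cnn(input_shape=(64, 64, 3), dropout_rate=0.25):
    """Binary classifier for sliding-window patches (nucleus vs. background)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dropout(dropout_rate),   # dropout rate varied across the sweep
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (x_train/x_test are hypothetical patch arrays, y_* binary labels):
# model = build_swm_cnn()
# model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))
```

Reading `fit` history against such a sweep mirrors the table: training loss keeps falling as epochs increase (No. 2 → No. 3), while test loss bottoms out and then rises, which is the overlearning signature noted in footnote 2.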