Question

public static double[][] selectKernelSet(double[][] trainingSet, double kernelProportion, Random rng){

//Create a new 2D array for the kernel set.
//number of rows: kernelProportion times number of rows in trainingSet
// cast the result of this calculation to an int
//number of cols: same as the number of cols in trainingSet


//Each row of the kernel set is assigned a random row of the trainingSet
//using the random number generator to generate integers in the range
//0 up to but not including the length of the trainingSet.
//Note it is okay if the same example is selected more than once.


//Replace the line below to return your kernel set array.
return null;
}

Can anyone help me write this method? Thank you.

Explanation / Answer
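
The method you asked about only has to build a smaller 2D array by picking rows of trainingSet at random, with replacement. Below is a minimal sketch that follows the comments in the skeleton; it assumes kernelProportion is a fraction between 0 and 1 (as the assignment implies) and is meant to drop into the NeuralNetwork class shown further down, where java.util.Random is already imported.

    public static double[][] selectKernelSet(double[][] trainingSet, double kernelProportion, Random rng){
        //Number of rows: kernelProportion times the number of training rows, truncated to an int.
        //Number of cols: same as the number of cols in trainingSet.
        int nRows = (int) (kernelProportion * trainingSet.length);
        double[][] kernelSet = new double[nRows][trainingSet[0].length];

        //Each kernel row is a randomly chosen training row; picking the same row twice is fine.
        for(int i = 0; i < nRows; i++){
            int randomRow = rng.nextInt(trainingSet.length);
            kernelSet[i] = trainingSet[randomRow];
        }
        return kernelSet;
    }

Because rng.nextInt(trainingSet.length) returns an integer from 0 up to but not including trainingSet.length, every generated index is a valid row, and duplicates are allowed, exactly as the comments state. Assigning kernelSet[i] = trainingSet[randomRow] stores a reference to the training row rather than a copy, which matches the "each row is assigned a random row" wording; use java.util.Arrays.copyOf(trainingSet[randomRow], trainingSet[randomRow].length) instead if you want independent copies.

For context, here is the full assignment skeleton, reformatted, with the remaining methods still to be completed: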

import java.util.*;

public class NeuralNetwork {

    /**
     * Train and then test the learning network (radial basis function network)
     * using examples and other data from the Dataset class.
     *
     * @param args
     */
    public static void main(String[] args){
        //****************************************************
        //* CHANGE TEST BELOW TO true TO TEST YOUR CODE      *
        //* (see the Testing Section of the assignment page) *
        //****************************************************
        final boolean TEST = false;

        //***************************************************
        //* DO NOT CHANGE THE CODE IN THE MAIN METHOD BELOW *
        //***************************************************
        if(TEST){
            Dataset.test();
        }

        Random rng = new Random(Dataset.seed);
        double[][] kernelSet = selectKernelSet(Dataset.trainingSet, Dataset.kernelProportion, rng);
        double variance = findMaxDistance(kernelSet) * Dataset.varianceMaxDistanceProportion;
        double[][] weights = trainNetwork(Dataset.trainingSet, kernelSet, variance,
                Dataset.nClassifications, Dataset.learningRate, Dataset.nIterations, rng);
        double accuracy = testNetwork(kernelSet, variance, weights, Dataset.testSet);
        System.out.println("The learning network accuracy is: " + accuracy);
    }

    /**
     * Selects some of the training set examples to also be used in the kernel
     * set.
     *
     * @param trainingSet An array of examples from which the kernel set is selected.
     * @param kernelProportion The proportion of the trainingSet to also be used as the kernelSet.
     * @param rng A random number generator.
     * @return The array of examples selected as the kernel set.
     */
    public static double[][] selectKernelSet(double[][] trainingSet, double kernelProportion, Random rng){
        //Create a new 2D array for the kernel set.
        //number of rows: kernelProportion times number of rows in trainingSet
        //  cast the result of this calculation to an int
        //number of cols: same as the number of cols in trainingSet

        //Each row of the kernel set is assigned a random row of the trainingSet
        //using the random number generator to generate integers in the range
        //0 up to but not including the length of the trainingSet.
        //Note it is okay if the same example is selected more than once.

        //Replace the line below to return your kernel set array.
        return null;
    }

    /**
     * Finds the maximum distance between any two examples in the kernelSet.
     *
     * @param kernelSet An array of examples used as the kernel set.
     * @return The maximum distance between any two examples in the kernel set.
     */
    public static double findMaxDistance(double[][] kernelSet){
        //Calculate the distance between every pair of examples
        //and store the maximum distance found.

        //Replace the line below to return the maximum distance found.
        return 0;
    }

    /**
     * Trains the learning network by calculating the weights for the perceptron
     * stage of the network.
     *
     * @param trainingSet An array of examples used to train the network.
     * @param kernelSet An array of examples used as the kernel set.
     * @param variance The width parameter of the Gaussian function.
     * @param nClassifications The number of classifications in trainingSet.
     * @param learningRate A multiplicative constant in the perceptron training rule.
     * @param nIterations The number of iterations over the trainingSet.
     * @param rng A random number generator.
     * @return A 2D array with each row representing the weights that were
     * learned by a particular perceptron in the network.
     */
    public static double[][] trainNetwork(double[][] trainingSet, double[][] kernelSet, double variance,
            int nClassifications, double learningRate, int nIterations, Random rng){
        //Create a new 2D array of weights
        //number of rows: the number of classifications
        //number of cols: one larger than the number of rows in the kernelSet

        //Initialize your new weights array with random doubles in the range
        //-1.0 inclusive to +1.0 exclusive

        //Create a new 2D array of inputs for the perceptrons
        //number of rows: the length of the training set
        //number of cols: same as the number of cols in your weights array

        //Assign each row of your new inputs array with the row returned
        //by calcPerceptronInput called on each example in the training set.

        //Main training algorithm.
        //Repeat for the given number of iterations
        //  Repeat for each example in the training set
        //    Set perceptron_classification to the result from classifyInput using
        //    this training set example's corresponding perceptron input (determined above)
        //    Repeat for each possible_classification: 0 to nClassifications - 1
        //      Determine the target value (1 or -1): target is 1 if possible_classification
        //      is equal to the actual classification of the example. Otherwise, it is -1.
        //      Determine the output value: output is 1 if the perceptron_classification
        //      is equal to the possible_classification. Otherwise, it is -1.
        //      Repeat for each weight of the possible_classification
        //        Increase the weight by the learning rate times (the target minus the output)
        //        times the perceptron input value for this training set example and weight

        //Replace the line below to return your weights array.
        return null;
    }

    /**
     * Calculates the distances between an example and each example in the
     * kernel set, and then applies a Gaussian function to each one of these
     * distances. The returned array also has a single additional value of 1
     * at the end of the array, which is used later when calculating a weighted
     * sum.
     *
     * @param example A 1D array of values corresponding to the features of an
     * example. The last value is the correct classification for the example.
     * @param kernelSet An array of examples that are compared to an example
     * input into the learning network.
     * @param variance The width parameter of the Gaussian function.
     * @return A 1D array of values to be input into the perceptron stage of
     * the network.
     */
    public static double[] calcPerceptronInput(double[] example, double[][] kernelSet, double variance){
        //Create a new 1D array for an input to the perceptron stage
        //length: one larger than the number of rows in the kernelSet

        //Assign to each of the first kernelSet.length elements of your input array the
        //distance calculated between example and the corresponding example in the kernelSet.

        //Update each of those values in your input array by applying the Gaussian
        //function to each value.

        //Assign 1 to the last element in your input array.
        //(The last weight in a weight array does not correspond to an example from the
        //kernel set, but just increases or decreases the weighted sum by some constant
        //amount, just like the b value does in y = mx + b. Because we will later perform
        //a weighted sum with the perceptron input and a weight array, for consistency,
        //we need to multiply that last weight by 1.)

        //Replace the line below to return your input array.
        return null;
    }

    /**
     * Calculates the distance (Euclidean) between two examples using the
     * features of each example. NOTE: the last value of each example is
     * NOT used in this calculation.
     *
     * @param example1 A 1D array of values corresponding to the features of an
     * example. The last value is the correct classification for the example.
     * @param example2 A 1D array of values corresponding to the features of an
     * example. The last value is the correct classification for the example.
     * @return The distance between the features of the two examples.
     */
    public static double calcDistance(double[] example1, double[] example2){
        //For each corresponding feature:
        //  subtract the value of example 2 from example 1
        //  square the result of the difference
        //  add this squared difference to a running total

        //Replace the line below to return the square root of the running total.
        return 0;
    }

    /**
     * Calculates: e ^ (-(value ^ 2) / (2 * variance)), which is a mean-centered
     * Gaussian function.
     *
     * @param value The value to which the Gaussian function is applied.
     * @param variance The width parameter of the Gaussian function.
     * @return The result of applying the Gaussian function.
     */
    public static double applyGaussian(double value, double variance){
        //Hint: Use the methods in the Math class.

        //Replace the line below to return the result.
        return 0;
    }

    /**
     * Determines the classification for a given perceptron input using the
     * learning network. It first performs a weighted sum of the perceptron
     * input and each row of the weights 2D array. It then finds the index of
     * the largest weighted sum and returns that index as the learning network's
     * classification for that input.
     *
     * @param perceptronInput A 1D array of values to be input into the
     * perceptron stage of the network.
     * @param weights A 2D array with each row representing the weights used
     * by a particular perceptron in the network.
     * @return An integer between 0 and number of classes - 1, representing the
     * learning network's classification for that input.
     */
    public static int classifyInput(double[] perceptronInput, double[][] weights){
        //Create a new 1D array for output. There is one output for each output node,
        //and the number of output nodes equals the number of rows in the weights array.

        //Assign each element of output the result of doing a weighted sum
        //of the corresponding row of the weights array and the perceptronInput.

        //Replace the line below to return the index of the maximum value in output.
        return 0;
    }

    /**
     * Calculates and returns the sum of the products of all the corresponding
     * pairs of values in the weights and inputs arrays.
     *
     * Example: If the weights array contains the values {4, 5, 10} and the inputs
     * array contains the values {3, 6, 1}, then the result should be equal to
     * (4*3)+(5*6)+(10*1).
     *
     * @param weights An array of weights.
     * @param inputs An array of input values.
     * @return The weighted sum of the two arrays.
     */
    public static double calcWeightedSum(double[] weights, double[] inputs){
        //Replace the line below to return the result of the weighted sum.
        return 0;
    }

    /**
     * Finds the index of the largest value in the array. If more than one
     * index has the same largest value, return the index of the first element
     * found with the largest value.
     *
     * @param values An array of values.
     * @return The index of the largest value in the array.
     */
    public static int findMaxIndex(double[] values){
        //Replace the line below to return the index of the largest value found.
        return 0;
    }

    /**
     * Calculates the accuracy of the learning network on a given test set.
     * It classifies each example in the test set and compares that
     * classification to the example's actual classification, keeping track of
     * how many times the classification was correct. It returns the accuracy
     * as the proportion of examples from the test set that were classified
     * correctly.
     *
     * @param kernelSet An array of examples used as the kernel set.
     * @param variance The width parameter of the Gaussian function.
     * @param weights A 2D array with each row representing the weights that
     * were learned by a particular perceptron in the network.
     * @param testSet An array of examples used to test the network.
     * @return The proportion of examples from the test set that were
     * classified correctly.
     */
    public static double testNetwork(double[][] kernelSet, double variance, double[][] weights, double[][] testSet){
        //Repeat for each example in the test set
        //  calculate the perceptron input for the test example
        //  classify that input
        //  compare the classification with the value of the last element in the
        //  test example, which is its correct classification
        //  count it if it was correct

        //Replace the line below to return the proportion of correct classifications.
        return 0;
    }
}
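
The remaining TODO methods are completely described by the comments above them, so only sketches are given here. The bodies below are one possible reading of those comments rather than the official solution; they rely only on the other methods in the skeleton (calcDistance, applyGaussian, calcWeightedSum, findMaxIndex, calcPerceptronInput, classifyInput) and on java.util.Random, and they are meant to replace the placeholder return statements in the class above.

    public static double findMaxDistance(double[][] kernelSet){
        double max = 0;
        for(int i = 0; i < kernelSet.length; i++){
            for(int j = i + 1; j < kernelSet.length; j++){
                max = Math.max(max, calcDistance(kernelSet[i], kernelSet[j]));
            }
        }
        return max;
    }

    public static double calcDistance(double[] example1, double[] example2){
        double total = 0;
        for(int i = 0; i < example1.length - 1; i++){   //skip the classification in the last slot
            double difference = example1[i] - example2[i];
            total += difference * difference;
        }
        return Math.sqrt(total);
    }

    public static double applyGaussian(double value, double variance){
        return Math.exp(-(value * value) / (2 * variance));
    }

    public static double[] calcPerceptronInput(double[] example, double[][] kernelSet, double variance){
        double[] input = new double[kernelSet.length + 1];
        for(int i = 0; i < kernelSet.length; i++){
            input[i] = applyGaussian(calcDistance(example, kernelSet[i]), variance);
        }
        input[kernelSet.length] = 1;   //constant bias input
        return input;
    }

    public static double calcWeightedSum(double[] weights, double[] inputs){
        double sum = 0;
        for(int i = 0; i < weights.length; i++){
            sum += weights[i] * inputs[i];
        }
        return sum;
    }

    public static int findMaxIndex(double[] values){
        int maxIndex = 0;
        for(int i = 1; i < values.length; i++){
            if(values[i] > values[maxIndex]){   //strict > keeps the first occurrence
                maxIndex = i;
            }
        }
        return maxIndex;
    }

    public static int classifyInput(double[] perceptronInput, double[][] weights){
        double[] output = new double[weights.length];
        for(int i = 0; i < weights.length; i++){
            output[i] = calcWeightedSum(weights[i], perceptronInput);
        }
        return findMaxIndex(output);
    }

    public static double[][] trainNetwork(double[][] trainingSet, double[][] kernelSet, double variance, int nClassifications, double learningRate, int nIterations, Random rng){
        //Weights: one row per classification, one column per kernel example plus a bias column,
        //initialized with random doubles in [-1.0, 1.0).
        double[][] weights = new double[nClassifications][kernelSet.length + 1];
        for(int i = 0; i < weights.length; i++){
            for(int j = 0; j < weights[i].length; j++){
                weights[i][j] = rng.nextDouble() * 2 - 1;
            }
        }
        //Precompute the perceptron input for every training example.
        double[][] inputs = new double[trainingSet.length][];
        for(int i = 0; i < trainingSet.length; i++){
            inputs[i] = calcPerceptronInput(trainingSet[i], kernelSet, variance);
        }
        //Perceptron training rule, following the commented pseudocode.
        for(int iteration = 0; iteration < nIterations; iteration++){
            for(int e = 0; e < trainingSet.length; e++){
                int perceptronClassification = classifyInput(inputs[e], weights);
                int actualClassification = (int) trainingSet[e][trainingSet[e].length - 1];
                for(int c = 0; c < nClassifications; c++){
                    int target = (c == actualClassification) ? 1 : -1;
                    int output = (c == perceptronClassification) ? 1 : -1;
                    for(int w = 0; w < weights[c].length; w++){
                        weights[c][w] += learningRate * (target - output) * inputs[e][w];
                    }
                }
            }
        }
        return weights;
    }

    public static double testNetwork(double[][] kernelSet, double variance, double[][] weights, double[][] testSet){
        int correct = 0;
        for(double[] example : testSet){
            double[] input = calcPerceptronInput(example, kernelSet, variance);
            if(classifyInput(input, weights) == (int) example[example.length - 1]){
                correct++;
            }
        }
        return (double) correct / testSet.length;
    }

One design note: trainNetwork computes calcPerceptronInput once per training example before the iteration loop, as the comments direct, because those inputs never change between iterations; recomputing them inside the loop would produce the same weights but would be much slower.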