
7.2 Data Analysis Code

7.2.2 Analysis Notebooks

This section contains three notebooks:

1. AHE Data Analysis

2. AMR-USMR Data Analysis

3. Pulse Switching Data Analysis

AHE

January 16, 2020

1 AHE DATA ANALYSIS

This notebook finds the coercive field values for AHE measurements and performs loop-shift analysis to obtain a linear fit of average coercivity versus applied current. Given the proper device characteristics, it also estimates the effective field per current density and the spin Hall angle.

Select file input and device parameters below.

[ ]: # user inputs
dir_path = r'C:\Users\nqmur\OneDrive\Desktop\Data_Analysis\AHE_Loopshift'  # path to directory
file_type = '_200'
normalize_data = True  # change to False to see regular y data; default is True

# Device characteristics
w = 10e-6  # device width (meters)
d = 4e-9  # thickness of spin Hall material (meters)
t = 1.4e-9  # thickness of magnetic layer - dead layer (meters)
M = 1500 * 1000  # saturation magnetization (A/m)
rho_FM = 40  # resistivity of magnetic layer (uOhm-cm)
rho_HM = 300  # resistivity of spin Hall material (uOhm-cm)

[ ]: import pandas as pd
import glob
import os
import math
import numpy as np
from datetime import datetime
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
from scipy import optimize
from AnalysisFunctions import drop_regions, find_zeros, import_datasets

all_files = glob.glob(os.path.join(dir_path, '*' + file_type + '*.csv'))  # use os.path.join to make os independent
# ignore results file (the original removed items while iterating, which can skip entries)
all_files = [x for x in all_files if 'results.csv' not in x]

if len(all_files) != 0:
    full_df = import_datasets(all_files, normalize_data)  # if True, data is automatically normalized
    display(full_df.head())
else:
    print(f'No csv files found in {dir_path} with the format: {file_type}!')

SELECT DATAFRAME AND GRAPH PARAMETERS:

x_column is the column used for x values for graphing and data analysis.
y_column is the column used for data analysis and graphing.
hue_column is the column used for separating the data sets into individual line/scatter plots.
graph_column is the column that a set of hue_column values is grouped by.

The following cell will provide a list of the unique values found in the hue and graph columns.

[ ]: x_column = 'Field(Oe)'  # plot x data values
y_column = 'Normalized Resistance(Ohm)'  # plot y data values
hue_column = 'Applied current (mA)'  # column to determine how lines should be colored
graph_column = 'Applied in-plane field (Oe)'  # separates data by this column to graph plots (multi-hued)

full_df[hue_column] = full_df[hue_column].apply(lambda x: round(x, 2))  # round to two decimal places
full_df[graph_column] = full_df[graph_column].apply(lambda x: round(x, 2))

# sorted lists of unique values
hue_column_list = np.sort(full_df[hue_column].unique())
graph_column_list = np.sort(full_df[graph_column].unique())

# print values so they are easy to paste into the ignore function
print(hue_column, 'list values in the dataframe: ' + np.array2string(hue_column_list, separator=','))
print(graph_column, 'list values in the dataframe: ' + np.array2string(graph_column_list, separator=','))

[ ]: # plot data
sns.set_style('whitegrid')
sns.set_palette('bright', len(hue_column_list))
f = sns.FacetGrid(full_df, hue=hue_column, col=graph_column, height=7, despine=False)
f.map(plt.plot, x_column, y_column).add_legend()

IGNORING DATA To exclude any datasets from the following analysis, add a graph_column value : hue_column value entry to the ignore_dict in the following cell; the listed data will be dropped from the analysis. Each graph_column value should be a key string, and the hue_column values should be given in a list as ints or floats. To ignore an entire graph_column value, use hue_column_list to ignore all data for that value. A sketch of the imported drop_regions helper is given below.
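drop_regions is imported from AnalysisFunctions, which is not reproduced in this section. For reference, a minimal sketch consistent with how it is called here (returning the cleaned frame, a frame of the ignored rows or None, and messages for entries that were not found) could look like the following; the actual implementation may differ.

def drop_regions(df, ignore_dict, graph_list, graph_col, hue_col):
    # Sketch only: drop (graph value, hue value) pairs listed in ignore_dict
    ignored_frames, not_found = [], []
    for key, hue_values in ignore_dict.items():
        g = float(key)
        if g not in graph_list:
            not_found.append(f'{graph_col} {key} not found')
            continue
        for h in hue_values:
            mask = (df[graph_col] == g) & (df[hue_col] == h)
            if mask.any():
                ignored_frames.append(df[mask])  # keep ignored rows for plotting
                df = df[~mask]
            else:
                not_found.append(f'{hue_col} {h} not found for {graph_col} {key}')
    ig_df = pd.concat(ignored_frames) if ignored_frames else None
    return df, ig_df, not_found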

[ ]: # fill in dictionary with values to be ignored in the subsequent data processing
ignore_dict = {
    # format: key == graph_column value, values == list(hue_column values to ignore)
    '400.0': [-0.7, -0.3, 0.5],
}

# make copy so original import does not need to be repeated
cleaned_df = full_df.copy()

# drop the areas from the full dataframe specified in the ignore_dict
cleaned_df, ig_df, not_found = drop_regions(
    cleaned_df, ignore_dict, graph_column_list, graph_column, hue_column)

# update the hue and graph columns
hue_column_list = np.sort(cleaned_df[hue_column].unique())
graph_column_list = np.sort(cleaned_df[graph_column].unique())

for string in not_found:
    print(string)

# if there was ignored data, plot said data
if type(ig_df) == pd.DataFrame:
    f = sns.FacetGrid(ig_df, hue=hue_column,
                      col=graph_column, height=7, despine=False)
    f.map(plt.plot, x_column, y_column).add_legend()

DATA ANALYSIS: AHE Data analysis checks for the coercivity values closest to the middle of the loop with the lowest index. By changing the value of check_left_right the user can change how many points before and after a zero intercept should be of the same sign. Loops that have more than two zeros will automatically be flagged and the user will be prompted whether or not to ignore those loops.
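find_zeros is also imported from AnalysisFunctions and is not listed in this section. Based on how it is called (an xy array plus check_left_right in; the zero-crossing x values and a string flag out), a minimal sketch might be the following; the exact implementation may differ.

def find_zeros(xy, check_left_right):
    # Sketch only: return x values where y crosses zero, requiring
    # check_left_right points on either side of the crossing to hold their
    # sign, plus a string flag marking loops with more than two crossings
    x, y = xy[:, 0], xy[:, 1]
    zeros, flag = [], 'False'
    for i in range(check_left_right, len(y) - check_left_right):
        left = y[i - check_left_right:i]
        right = y[i:i + check_left_right]
        if ((left < 0).all() and (right > 0).all()) or \
           ((left > 0).all() and (right < 0).all()):
            zeros.append((x[i - 1] + x[i]) / 2)  # crossing between samples
    if len(zeros) > 2:
        flag = 'True'
        mid = (x.max() + x.min()) / 2
        zeros = sorted(zeros, key=lambda z: abs(z - mid))[:2]  # keep the two nearest the loop centre
    return zeros, flag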

[ ]: # number of datapoints to compare
check_left_right = 1

# building the df to the proper size optimizes speed; it will be larger than
# needed if data is ignored
# NOTE: part of the columns list was lost in the PDF export; the first three
# names are inferred from later cells, 'Loop Shift' and 'Flag' are placeholders
coercivity_df = pd.DataFrame(index=np.arange(len(graph_column_list) * len(hue_column_list)),
                             columns=['Positive X Coercivity', 'Negative X Coercivity',
                                      'Average Coercivity', 'Loop Shift', 'Flag',
                                      hue_column, graph_column],
                             dtype='float')

# go through cleaned_df and find coercivity values (x data) and save in coercivity_df
len_index = 0
multi_zero = {}

for g in graph_column_list:
    # h_list is the specific hue values for each graph value, i.e. skips all values that are ignored
    h_list = np.sort(
        cleaned_df[(cleaned_df[graph_column] == g)][hue_column].unique())
    for h in h_list:
        # from the dataframe of specific graph_column and hue_column values,
        # pass xy values as a numpy array
        z, flag = find_zeros(cleaned_df[(cleaned_df[graph_column] == g) &
                                        (cleaned_df[hue_column] == h)]
                             .loc[:, [x_column, y_column]].to_numpy(), check_left_right)
        if flag == 'True':
            if str(g) in multi_zero:
                multi_zero[str(g)].append(h)
            else:
                multi_zero[str(g)] = [h]
        if len(z) != 2:
            print(f'Coercivity values not found for dataset {hue_column} {h} with {graph_column} {g}')
        else:
            coercivity_df.loc[len_index] = z + [(z[0] + z[1]) / 2, 0.0, flag, h, g]
            len_index += 1

if len(multi_zero) != 0:
    for key, value in multi_zero.items():
        print(f'Multiple zeros found for loop: {key} {value}')
    q = input('Do you wish to ignore data with multiple zeros? (y/n)')
    if q == 'Y' or q == 'y':
        cleaned_df, ig, nf = drop_regions(
            cleaned_df, multi_zero, graph_column_list, graph_column, hue_column)
        coercivity_df, ig, nf = drop_regions(
            coercivity_df, multi_zero, graph_column_list, graph_column, hue_column)
        hue_column_list = np.sort(cleaned_df[hue_column].unique())
        graph_column_list = np.sort(cleaned_df[graph_column].unique())
    else:
        pass
else:
    print('Only two zeros detected per loop')

coercivity_df.head()

[ ]: # plot data with found points
f = sns.FacetGrid(cleaned_df, hue=hue_column,
                  col=graph_column, height=7, despine=False)
f.map(plt.plot, x_column, y_column, zorder=0).add_legend()
for index, ax in enumerate(f.axes[0]):
    # NOTE: the interior of this scatter call was lost in the PDF export; the
    # found coercivities are plotted at y = 0 here as a plausible reconstruction
    sub = coercivity_df[coercivity_df[graph_column] == graph_column_list[index]]
    xs = sub.loc[:, ['Positive X Coercivity', 'Negative X Coercivity']].to_numpy().ravel()
    ax.scatter(xs, np.zeros(len(xs)), c='black', marker='D', zorder=1)

AHE RESULTS Slope, intercept, effective field per current, and spin Hall angle are found for each unique value of the graph_column and stored in a DataFrame. Regardless of the device-specific parameters set by the user in the first cell, the fitted slope and intercept for each graph should be accurate to the dataset. The user has the option to ignore any specific data when saving the DataFrame in the last cell. The relations implemented below are summarized next.
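Expressed compactly, the fit and conversion in the next cell correspond to (numerical unit prefactors from the code omitted; this is a restatement of the code below, not an independent derivation):

\[ H_{\mathrm{avg}}(I) = mI + b, \qquad \frac{H}{J} \propto \frac{m\,w\,d}{r}, \qquad r = \frac{\rho_{\mathrm{FM}}\,d}{\rho_{\mathrm{FM}}\,d + \rho_{\mathrm{HM}}\,t}, \]

with the spin Hall angle scaling as \( \theta_{SH} \propto (2/\pi)\, 2 M t\, w\, d\, m / r \), where \(r\) is the current-distribution ratio as written in the code.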

[ ]: # dataframe containing data that will be saved to final output
results_df = pd.DataFrame(index=np.arange(len(graph_column_list)),
                          columns=['Fitting Slope', 'Fitting Intercept',
                                   'Effective Field Per Current (Oe/A*m^-2)',
                                   'Spin Hall Angle', graph_column])

# use scipy curve_fit (non-linear least squares) with a linear model
def linear_test_function(x, m, b):
    return (m * x) + b

data_index = 0
for g in graph_column_list:
    try:
        params, params_covariance = optimize.curve_fit(
            linear_test_function,  # function to fit
            coercivity_df[(coercivity_df[graph_column] == g)][hue_column].to_numpy(),  # x values
            coercivity_df[(coercivity_df[graph_column] == g)]['Average Coercivity'].to_numpy())  # y values

        # Current distribution ratio
        ratio = (rho_FM * d) / (rho_FM * d + rho_HM * t)

        # Calculations of switching currents/SOT efficiency/etc.
        HperJ = (params[0] * 10e11 * (1 / 1000) * (w * d)) / ratio  # H/J in Oe/A*m^-2 * 10e11
        she = (2 * M * t * w * d * params[0]) / (10 * 6.6e-16) * (2 / math.pi) / ratio

        results_df.loc[data_index] = [params[0], params[1], HperJ, she, g]
        data_index += 1
    except Exception:
        print(f'An error occurred with {g} {graph_column}')

results_df.head()

[ ]: # plot data with found coercivities and fitting lines
f = sns.FacetGrid(coercivity_df, col=graph_column, height=7, despine=False)
f.map(plt.scatter, hue_column, 'Average Coercivity')
for index, ax in enumerate(f.axes[0]):
    ax.scatter(coercivity_df[(coercivity_df[graph_column] ==
                              graph_column_list[index])].loc[:, hue_column].to_numpy(),
               coercivity_df[(coercivity_df[graph_column] ==
                              graph_column_list[index])].loc[:, 'Positive X Coercivity'].to_numpy(),
               color='green')
    ax.scatter(coercivity_df[(coercivity_df[graph_column] ==
                              graph_column_list[index])].loc[:, hue_column].to_numpy(),
               coercivity_df[(coercivity_df[graph_column] ==
                              graph_column_list[index])].loc[:, 'Negative X Coercivity'].to_numpy(),
               color='orange')
    # NOTE: the opening of this plot call was lost in the PDF export; it is
    # reconstructed here following the analogous AMR cell later in this section
    ax.plot(coercivity_df[(coercivity_df[graph_column] ==
                           graph_column_list[index])].loc[:, hue_column],
            linear_test_function(coercivity_df[(coercivity_df[graph_column] ==
                                                graph_column_list[index])].loc[:, hue_column].to_numpy(),
                                 *[
                results_df[(results_df[graph_column] ==
                            graph_column_list[index])]['Fitting Slope'].to_numpy()[0],  # slope per graph value
                results_df[(results_df[graph_column] ==
                            graph_column_list[index])]['Fitting Intercept'].to_numpy()[0]  # intercept per graph value
            ]))

[ ]: filename = 'Loopshift'
timestamp = datetime.now().strftime('%Y-%m-%d-%H-%M')
save_ignore = [
    # list of values to ignore go here
    # 100,
    # -1000, -1200
]

for n in save_ignore:
    try:
        # NOTE: the original assigned the result of drop(inplace=True), which
        # returns None; drop in place without reassigning
        results_df.drop(results_df[(results_df[graph_column] == n)].index, inplace=True)
        print(f'Dropped value {n} from the results dataframe')
    except Exception:
        print(f'Failed to drop value {n}!')

try:
    results_df.to_csv(os.path.join(dir_path, file_type + filename + timestamp + 'results.csv'),
                      encoding='utf-8', index=False)
    print(os.path.join(dir_path, file_type + filename +
                       timestamp + 'results.csv') + ' saved successfully')
except Exception:
    print('Failed to save results.')

[ ]:


AMR-USMR

January 16, 2020

1 AMR and USMR DATA ANALYSIS

This notebook finds the peaks and the difference between the left and right sides of the measurement data, and performs linear fitting on the results.

Select file input parameters below.

[ ]: # user inputs
dir_path = r'C:\Users\nqmur\OneDrive\Desktop\Data_Analysis\AMR_Data'  # path to directory
file_type = ''
normalize_data = True  # change to False to see regular y data; default is True

[ ]: import pandas as pd
import glob
import os
import math
from datetime import datetime
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
from scipy import optimize
from AnalysisFunctions import drop_regions, import_datasets, find_peaks, find_resistance_change

all_files = glob.glob(os.path.join(dir_path, '*' + file_type + '*.csv'))  # use os.path.join to make os independent
# ignore results file (the original removed items while iterating, which can skip entries)
all_files = [x for x in all_files if 'results.csv' not in x]

if len(all_files) != 0:
    full_df = import_datasets(all_files, normalize_data, norm_to_zero=True)  # if True, data is automatically normalized
    display(full_df.head())
else:
    print(f'No csv files found in {dir_path} with the format: {file_type}!')

# NOTE: the original omitted columns=, which renames index labels instead
full_df.rename(columns={'Hx': 'Hx New'}, inplace=True)

SELECT DATAFRAME AND GRAPH PARAMETERS:

x_column is the column used for x values for graphing and data analysis.
y_column is the column used for data analysis; it can also be selected for graphing.
y_normed_column: if normalize_data is set to True, these values will be the default plot y values.
hue_column is the column used for separating the data sets into individual line/scatter plots.
graph_column is the column that a set of hue_column values is grouped by.

The following cell will provide a list of the unique values found in the hue and graph columns.

[ ]: x_column = 'Field(Oe)'
y_column = 'Resistance(Ohm)'
y_normed_column = 'Normalized Resistance(Ohm)'
hue_column = 'Applied current (mA)'  # column to determine how lines should be colored
graph_column = 'Applied in-plane field (Oe)'  # separates data by this column to graph plots (multi-hued)

full_df[hue_column] = full_df[hue_column].apply(lambda x: round(x, 2))  # round to two decimal places
full_df[graph_column] = full_df[graph_column].apply(lambda x: round(x, 2))

# sorted lists of unique values
hue_column_list = np.sort(full_df[hue_column].unique())
graph_column_list = np.sort(full_df[graph_column].unique())

print(hue_column, 'list values in the dataframe: ' + np.array2string(hue_column_list, separator=','))
print(graph_column, 'list values in the dataframe: ' + np.array2string(graph_column_list, separator=','))

[ ]: # plot data
sns.set_style('whitegrid')
sns.set_palette('bright', len(hue_column_list))
f = sns.FacetGrid(full_df, hue=hue_column, col=graph_column, height=7, despine=False)
f.map(plt.plot, x_column, y_normed_column).add_legend()

IGNORING DATA To exclude any datasets from the following analysis, add a graph_column value : hue_column value entry to the ignore_dict in the following cell; the listed data will be dropped from the analysis. Each graph_column value should be a key string, and the hue_column values should be given in a list as ints or floats. To ignore an entire graph_column value, use hue_column_list to ignore all data for that value.

[ ]: # fill in dictionary with values to be ignored in the subsequent data processing
ignore_dict = {
    # format: key == graph_column value, values == list(hue_column values to ignore)
    '250.0': [-0.3],
}

cleaned_df = full_df.copy()  # make copy so original import does not need to be repeated

# drop the areas from the full dataframe specified in the ignore_dict
cleaned_df, ig_df, not_found = drop_regions(cleaned_df, ignore_dict,
                                            graph_column_list, graph_column, hue_column)

# update list of unique values
hue_column_list = np.sort(cleaned_df[hue_column].unique())
graph_column_list = np.sort(cleaned_df[graph_column].unique())

for string in not_found:
    print(string)

# if there was ignored data, plot said data
if type(ig_df) == pd.DataFrame:
    f = sns.FacetGrid(ig_df, hue=hue_column, col=graph_column, height=7, despine=False)
    f.map(plt.plot, x_column, y_normed_column).add_legend()

DATA ANALYSIS: AMR data analysis consists of two parts: the first finds the average x value between the two peaks, and the second checks the resistance change between the left and right halves of a dataset corresponding to a value in the hue_column_list. For USMR datasets, simply ignore the peak data results. A sketch of the imported find_peaks helper follows.
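find_peaks is imported from AnalysisFunctions and is not listed here. Judging from its use (an (x, y, normalized y) array in, seven values out, later named Right/Left Peak X, Average X, Right/Left Peak Y, and Normed Right/Left Peak), a minimal sketch might be the following; the exact implementation may differ.

def find_peaks(xyz):
    # Sketch only: locate the extremum on each half of the sweep and return
    # [right peak x, left peak x, average x, right peak y, left peak y,
    #  normed right peak, normed left peak]
    x, y, y_norm = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    mid = (x.max() + x.min()) / 2
    right, left = x >= mid, x < mid
    i_r = np.argmax(y[right])  # index of right-half peak
    i_l = np.argmax(y[left])   # index of left-half peak
    rx, lx = x[right][i_r], x[left][i_l]
    return [rx, lx, (rx + lx) / 2,
            y[right][i_r], y[left][i_l],
            y_norm[right][i_r], y_norm[left][i_l]]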

When checking the change in resistance, use the data_percent variable to select the percentage (as an int) of the data nearest the min/max x values to compare. Note that because there are two y points for each x value, selecting 15 percent results in a total of 60 percent of the data being compared. A sketch of find_resistance_change follows.
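find_resistance_change is likewise imported from AnalysisFunctions; a minimal sketch consistent with its call signature (xy array and data_percent in, a single resistance change out) might be:

def find_resistance_change(xy, data_percent):
    # Sketch only: average y over the data_percent% of points nearest the
    # minimum and maximum x values and return the difference
    x, y = xy[:, 0], xy[:, 1]
    n = max(1, int(len(x) * data_percent / 100))
    order = np.argsort(x)
    low = y[order[:n]].mean()    # y average near minimum x
    high = y[order[-n:]].mean()  # y average near maximum x
    return high - low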

[ ]: data_percent = 15  # percent of data to check near min and max values

# building the df to the proper size optimizes speed; it will be larger than
# needed if data is ignored
# peaks aren't always positive and negative
# NOTE: part of the columns list was lost in the PDF export; the names below
# are inferred from the columns dropped in the save cell, and 'Resistance
# Change' is a placeholder
coercivity_df = pd.DataFrame(index=np.arange(len(graph_column_list) * len(hue_column_list)),
                             columns=['Right Peak X', 'Left Peak X', 'Average X',
                                      'Right Peak Y', 'Left Peak Y', 'Normed Right Peak',
                                      'Normed Left Peak', 'Resistance Change',
                                      hue_column, graph_column],
                             dtype='float')

# go through cleaned_df and find peak values (x data) and save in coercivity_df
len_index = 0
for g in graph_column_list:
    # h_list is the specific hue values for each graph value, i.e. skips all values that are ignored
    h_list = np.sort(cleaned_df[(cleaned_df[graph_column] == g)][hue_column].unique())
    for h in h_list:
        # pass in x, y and normalized y values as an array for the specific loop (graph and hue)
        vals = find_peaks(cleaned_df[(cleaned_df[graph_column] == g) &
                                     (cleaned_df[hue_column] == h)
                                     ].loc[:, [x_column, y_column, y_normed_column]].to_numpy())
        delta_r = find_resistance_change(cleaned_df[(cleaned_df[graph_column] == g) &
                                                    (cleaned_df[hue_column] == h)
                                                    ].loc[:, [x_column, y_column]].to_numpy(), data_percent)
        if len(vals) != 7:  # list of 7 items if normal
            print(f'Issue with dataset {hue_column} {h} with {graph_column} {g}')
        else:
            coercivity_df.loc[len_index] = vals + [delta_r, h, g]
            len_index += 1

coercivity_df.head()

[ ]: # plot data with found points
f = sns.FacetGrid(cleaned_df, hue=hue_column, col=graph_column, height=7)
f.map(plt.plot, cleaned_df.columns.values[0], cleaned_df.columns.values[2], zorder=0).add_legend()
for index, ax in enumerate(f.axes[0]):
    # NOTE: the interior of this scatter call was lost in the PDF export; the
    # peak positions plotted here are a plausible reconstruction
    sub = coercivity_df[coercivity_df[graph_column] == graph_column_list[index]]
    ax.scatter(sub.loc[:, ['Right Peak X', 'Left Peak X']].to_numpy().ravel(),
               sub.loc[:, ['Normed Right Peak', 'Normed Left Peak']].to_numpy().ravel(),
               c='black', marker='D', zorder=1)

SAVE USMR DATA After selecting a filename and any specific datasets to ignore, the following cell saves a csv with the change in resistance, hue_column, and graph_column values. AMR data can be saved later on.

[ ]: filename = ''
timestamp = datetime.now().strftime('%Y-%m-%d-%H-%M')
save_ignore = {
    # '-1200': [0.3]
}

coercivity_df, ig_df, not_found = drop_regions(coercivity_df, save_ignore,
                                               graph_column_list, graph_column, hue_column)

for string in not_found:
    print(string)

try:
    coercivity_df.drop(columns=['Right Peak X', 'Left Peak X', 'Average X',
                                'Right Peak Y', 'Left Peak Y', 'Normed Right Peak',
                                'Normed Left Peak']
                       ).to_csv(os.path.join(dir_path, file_type + 'USMR-results.csv'),
                                encoding='utf-8', index=False)
    # NOTE: in the original, the printed path differs from the saved filename
    print(os.path.join(dir_path, file_type + filename + timestamp + 'results.csv') + ' saved successfully')
except Exception:
    print('Failed to save results.')

AMR RESULTS Using scipy optimization for a linear fit, the average x values and the fitted line of average x vs hue_column values are plotted. The fit parameters are stored in a DataFrame, which can be saved in the last cell of this notebook.

[ ]: # dataframe containing data that will be saved to final output
results_df = pd.DataFrame(index=np.arange(len(graph_column_list)),
                          columns=['Fitting Slope', 'Fitting Intercept', graph_column])

# use scipy curve_fit (non-linear least squares) with a linear model
def linear_test_function(x, m, b):
    return (m * x) + b

data_index = 0
for g in graph_column_list:
    try:
        params, params_covariance = optimize.curve_fit(
            linear_test_function,  # function to fit
            coercivity_df[(coercivity_df[graph_column] == g)][hue_column].to_numpy(),  # x values
            coercivity_df[(coercivity_df[graph_column] == g)]['Average X'].to_numpy())  # y values

        results_df.loc[data_index] = [params[0], params[1], g]
        data_index += 1
    except Exception:
        print(f'An error occurred with {g} {graph_column}')

results_df.head()

[ ]: # plot data with found coercivities and fitting lines
f = sns.FacetGrid(coercivity_df, col=graph_column, height=7)
f.map(plt.scatter, hue_column, 'Average X')
for index, ax in enumerate(f.axes[0]):
    ax.set_ylabel('Field Average (Oe)')
    ax.plot(coercivity_df[(coercivity_df[graph_column] ==
                           graph_column_list[index])].loc[:, hue_column],
            linear_test_function(coercivity_df[(coercivity_df[graph_column] ==
                                                graph_column_list[index])].loc[:, hue_column].to_numpy(),
                                 *[
                results_df[(results_df[graph_column] ==
                            graph_column_list[index])]['Fitting Slope'][0],  # slope per graph value
                results_df[(results_df[graph_column] ==
                            graph_column_list[index])]['Fitting Intercept'][0]  # intercept per graph value
            ]))

[ ]: filename = ''
timestamp = datetime.now().strftime('%Y-%m-%d-%H-%M')
save_ignore = [
    # list of values to ignore go here
    # 100,
    # -1000,
]

for n in save_ignore:
    try:
        # NOTE: the original assigned the result of drop(inplace=True), which
        # returns None; drop in place without reassigning
        results_df.drop(results_df[(results_df[graph_column] == n)].index, inplace=True)
        print(f'Dropped value {n} from the results dataframe')
    except Exception:
        print(f'Failed to drop value {n}!')

try:
    results_df.to_csv(os.path.join(dir_path, file_type + 'AMR-results.csv'),
                      encoding='utf-8', index=False)
    # NOTE: in the original, the printed path differs from the saved filename
    print(os.path.join(dir_path, file_type + filename + timestamp + 'results.csv') + ' saved successfully')
except Exception:
    print('Failed to save results.')


Pulse_Switching

January 16, 2020

1 PULSE SWITCHING ANALYSIS

This notebook finds the critical current for current-switching loops and also performs thermal stability fitting. Estimates of full switching are made by comparison to the user-selected threshold. A sketch of a standard thermal stability model is given below.
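The thermal stability fitting itself falls outside the portion of the notebook reproduced here. For context, a common model for thermally activated switching, which a fit of critical current versus pulse width typically uses, is sketched below; whether the notebook uses exactly this form is an assumption.

import numpy as np

def critical_current_model(pulse_width, ic0, delta, tau0=1e-9):
    # Thermally activated switching: Ic shrinks logarithmically with pulse width
    # ic0: zero-fluctuation critical current; delta: thermal stability factor;
    # tau0: attempt time (assumed ~1 ns here)
    return ic0 * (1 - (1.0 / delta) * np.log(pulse_width / tau0))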

Select file input and threshold below.

[ ]: # user inputs
dir_path = r'Pulse_Switching_Data'  # path to directory
file_type = ''
normalize_data = True  # change to False to see regular y data; default is True

# Device characteristics
w = 10e-6  # device width (meters)
tau = 10 ** -9  # thermal stability factor
threshold = 0.3  # resistance threshold

[ ]: import pandas as pd
import glob
import os
import math
from datetime import datetime
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
from scipy import optimize
from AnalysisFunctions import drop_regions, find_zeros, import_datasets, find_resistance_change

all_files = glob.glob(os.path.join(dir_path, '*' + file_type + '*.csv'))  # use os.path.join to make os independent
# ignore results file (the original removed items while iterating, which can skip entries)
all_files = [x for x in all_files if 'results' not in x]

if len(all_files) != 0:
    full_df = import_datasets(all_files, normalize_data)  # if True, data is automatically normalized
    display(full_df.head())
else:
    print(f'No csv files found in {dir_path} with the format: {file_type}!')

SELECT DATAFRAME AND GRAPH PARAMETERS:

x_column is the column used for x values for graphing and data analysis.
y_column is the column used for data analysis.
y_normed_column: if normalize_data is set to True, these values will be the default plot y values.
hue_column is the column used for separating the data sets into individual line/scatter plots.
graph_column is the column that a set of hue_column values is grouped by.

The following cell will provide a list of the unique values found in the hue and graph columns.

[ ]: x_column = 'Current(mA)'
y_column = 'Resistance(Ohm)'
y_normed_column = 'Normalized Resistance(Ohm)'
hue_column = 'Pulse width s'  # column to determine how lines should be colored
graph_column = 'Applied in-plane field (Oe)'  # separates data by this column to graph plots (multi-hued)

full_df[hue_column] = full_df[hue_column].apply(lambda x: round(x, 2))  # round to two decimal places
full_df[graph_column] = full_df[graph_column].apply(lambda x: round(x, 2))

# sorted lists of unique values
hue_column_list = np.sort(full_df[hue_column].unique())
graph_column_list = np.sort(full_df[graph_column].unique())

print(hue_column, 'list values in the dataframe: ' + np.array2string(hue_column_list, separator=','))
print(graph_column, 'list values in the dataframe: ' + np.array2string(graph_column_list, separator=','))

[ ]: # plot data
sns.set_style('whitegrid')
sns.set_palette('bright', len(hue_column_list))
f = sns.FacetGrid(full_df, hue=hue_column, col=graph_column, height=7, despine=False)
f.map(plt.plot, x_column, y_normed_column).add_legend()

IGNORING DATA To exclude any datasets from the following analysis, add a graph_column value : hue_column value entry to the ignore_dict in the following cell; the listed data will be dropped from the analysis. Each graph_column value should be a key string, and the hue_column values should be given in a list as ints or floats. To ignore an entire graph_column value, use hue_column_list to ignore all data for that value.

[ ]: # fill in dictionary with values to be ignored in the subsequent data processing
ignore_dict = {
    # format: key == graph_column value, values == list(hue_column values to ignore)
    '100.0': [0.3],
}

cleaned_df = full_df.copy()  # make copy so original import does not need to be repeated

# drop the areas from the full dataframe specified in the ignore_dict
cleaned_df, ig_df, not_found = drop_regions(cleaned_df, ignore_dict,
                                            graph_column_list, graph_column, hue_column)

# update list of unique values
hue_column_list = np.sort(cleaned_df[hue_column].unique())
graph_column_list = np.sort(cleaned_df[graph_column].unique())

for string in not_found:
    print(string)

# if there was ignored data, plot said data
if type(ig_df) == pd.DataFrame:
    f = sns.FacetGrid(ig_df, hue=hue_column, col=graph_column, height=7, despine=False)
    f.map(plt.plot, x_column, y_column).add_legend()

DATA ANALYSIS: The pulse switching analysis finds the critical current values closest to the middle of the loop (at the lowest index) and also checks the total resistance change, comparing it to the threshold value set by the user in the first cell.

By changing the value of check_left_right the user can set how many points before and after a zero crossing must share the same sign. Loops that have more than two zeros are automatically flagged, and the user is prompted whether or not to ignore those loops.

When checking the change in resistance, use the data_percent variable to select the percentage (as an int) of the data nearest min/max x values to compare.

[ ]: check_left_right = 1  # number of datapoints to compare
data_percent = 15  # percent of data to check near min and max values

# building the df to the proper size optimizes speed; it will be larger than
# needed if data is ignored
