TF Lattice Aggregate Function Models


Overview

TFL Premade Aggregate Function Models are a quick and easy way to build TFL tf.keras.Model instances for learning complex aggregation functions. This guide outlines the steps needed to construct a TFL Premade Aggregate Function Model and train/test it.

Setup

āϟāĻŋāĻāĻĢ āĻ˛ā§āϝāĻžāϟāĻŋāϏ āĻĒā§āϝāĻžāϕ⧇āϜ āχāύāĻ¸ā§āϟāϞ āĻ•āϰāĻž āĻšāĻšā§āϛ⧇:

pip install -q tensorflow-lattice pydot

āĻĒā§āϰāϝāĻŧā§‹āϜāύ⧀āϝāĻŧ āĻĒā§āϝāĻžāϕ⧇āϜ āφāĻŽāĻĻāĻžāύāĻŋ āĻ•āϰāĻž āĻšāĻšā§āϛ⧇:

import tensorflow as tf

import collections
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)

Downloading the Puzzles dataset:

train_dataframe = pd.read_csv(
    'https://raw.githubusercontent.com/wbakst/puzzles_data/master/train.csv')
train_dataframe.head()
test_dataframe = pd.read_csv(
    'https://raw.githubusercontent.com/wbakst/puzzles_data/master/test.csv')
test_dataframe.head()

āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻāĻŦāĻ‚ āϞ⧇āĻŦ⧇āϞ āύāĻŋāĻˇā§āĻ•āĻžāĻļāύ āĻāĻŦāĻ‚ āϰ⧂āĻĒāĻžāĻ¨ā§āϤāϰ

# Features:
# - star_rating       rating out of 5 stars (1-5)
# - word_count        number of words in the review
# - is_amazon         1 = reviewed on amazon; 0 = reviewed on artifact website
# - includes_photo    if the review includes a photo of the puzzle
# - num_helpful       number of people that found this review helpful
# - num_reviews       total number of reviews for this puzzle (we construct)
#
# This ordering of feature names will be the exact same order that we construct
# our model to expect.
feature_names = [
    'star_rating', 'word_count', 'is_amazon', 'includes_photo', 'num_helpful',
    'num_reviews'
]
def extract_features(dataframe, label_name):
  # First we extract flattened features.
  flattened_features = {
      feature_name: dataframe[feature_name].values.astype(float)
      for feature_name in feature_names[:-1]
  }

  # Construct mapping from puzzle name to feature.
  star_rating = collections.defaultdict(list)
  word_count = collections.defaultdict(list)
  is_amazon = collections.defaultdict(list)
  includes_photo = collections.defaultdict(list)
  num_helpful = collections.defaultdict(list)
  labels = {}

  # Extract each review.
  for i in range(len(dataframe)):
    row = dataframe.iloc[i]
    puzzle_name = row['puzzle_name']
    star_rating[puzzle_name].append(float(row['star_rating']))
    word_count[puzzle_name].append(float(row['word_count']))
    is_amazon[puzzle_name].append(float(row['is_amazon']))
    includes_photo[puzzle_name].append(float(row['includes_photo']))
    num_helpful[puzzle_name].append(float(row['num_helpful']))
    labels[puzzle_name] = float(row[label_name])

  # Organize data into list of list of features.
  names = list(star_rating.keys())
  star_rating = [star_rating[name] for name in names]
  word_count = [word_count[name] for name in names]
  is_amazon = [is_amazon[name] for name in names]
  includes_photo = [includes_photo[name] for name in names]
  num_helpful = [num_helpful[name] for name in names]
  num_reviews = [[len(ratings)] * len(ratings) for ratings in star_rating]
  labels = [labels[name] for name in names]

  # Flatten num_reviews
  flattened_features['num_reviews'] = [len(reviews) for reviews in num_reviews]

  # Convert data into ragged tensors.
  star_rating = tf.ragged.constant(star_rating)
  word_count = tf.ragged.constant(word_count)
  is_amazon = tf.ragged.constant(is_amazon)
  includes_photo = tf.ragged.constant(includes_photo)
  num_helpful = tf.ragged.constant(num_helpful)
  num_reviews = tf.ragged.constant(num_reviews)
  labels = tf.constant(labels)

  # Now we can return our extracted data.
  return (star_rating, word_count, is_amazon, includes_photo, num_helpful,
          num_reviews), labels, flattened_features
train_xs, train_ys, flattened_features = extract_features(train_dataframe, 'Sales12-18MonthsAgo')
test_xs, test_ys, _ = extract_features(test_dataframe, 'SalesLastSixMonths')
# Let's define our label minimum and maximum.
min_label, max_label = float(np.min(train_ys)), float(np.max(train_ys))

āĻāχ āύāĻŋāĻ°ā§āĻĻ⧇āĻļāĻŋāĻ•āĻžāϝāĻŧ āĻĒā§āϰāĻļāĻŋāĻ•ā§āώāϪ⧇āϰ āϜāĻ¨ā§āϝ āĻŦā§āϝāĻŦāĻšā§ƒāϤ āĻĄāĻŋāĻĢāĻ˛ā§āϟ āĻŽāĻžāύ āϏ⧇āϟ āĻ•āϰāĻž:

LEARNING_RATE = 0.1
BATCH_SIZE = 128
NUM_EPOCHS = 500
MIDDLE_DIM = 3
MIDDLE_LATTICE_SIZE = 2
MIDDLE_KEYPOINTS = 16
OUTPUT_KEYPOINTS = 8

āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻ•āύāĻĢāĻŋāĻ—āĻžāϰ

āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻ•ā§āϰāĻŽāĻžāĻ™ā§āĻ•āύ āĻāĻŦāĻ‚ āĻĒā§āϰāϤāĻŋ-āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻ•āύāĻĢāĻŋāĻ—āĻžāϰ⧇āĻļāύ⧇āϰ āĻŦā§āϝāĻŦāĻšāĻžāϰ āύāĻŋāĻ°ā§āϧāĻžāϰāĻŖ āĻ•āϰāĻž āĻšāϝāĻŧ tfl.configs.FeatureConfig āĨ¤ āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻ•āύāĻĢāĻŋāĻ—āĻžāϰ⧇āĻļāύ⧇āϰ monotonicity āϏ⧀āĻŽāĻžāĻŦāĻĻā§āϧāϤāĻž, āĻĒā§āϰāϤāĻŋ-āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āύāĻŋāϝāĻŧāĻŽāĻŋāϤāĻ•āϰāĻŖ (āĻĻ⧇āϖ⧁āύ āĻ…āĻ¨ā§āϤāĻ°ā§āϭ⧁āĻ•ā§āϤ tfl.configs.RegularizerConfig ), āĻāĻŦāĻ‚ āϜāĻžāĻĢāϰāĻŋ āĻŽāĻĄā§‡āϞ⧇āϰ āϜāĻ¨ā§āϝ āϜāĻžāĻĢāϰāĻŋ āĻŽāĻžāĻĒāĨ¤

Note that we must fully specify the feature config for any feature that we want our model to recognize; otherwise the model has no way of knowing that such a feature exists. For aggregation models, these features will automatically be considered and properly handled as ragged.
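As a toy illustration (made-up values, not from the dataset), each per-review feature is passed to the model as a ragged tensor with one row per puzzle and one entry per review, so rows can have different lengths:

# Toy illustration of the ragged inputs the aggregation model consumes:
# the first puzzle has three reviews, the second has one.
example_star_rating = tf.ragged.constant([[5.0, 4.0, 3.0], [2.0]])
print(example_star_rating.shape)  # (2, None): rows have different lengths.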

āϕ⧋āϝāĻŧāĻžāĻ¨ā§āϟāĻžāχāϞ āĻ—āĻŖāύāĻž āĻ•āϰ⧁āύ

āϝāĻĻāĻŋāĻ“ āϜāĻ¨ā§āϝ āĻĄāĻŋāĻĢāĻ˛ā§āϟ āϏ⧇āϟāĻŋāĻ‚ pwl_calibration_input_keypoints āĻŽāĻ§ā§āϝ⧇ tfl.configs.FeatureConfig 'quantiles', premade āĻŽāĻĄā§‡āϞ⧇āϰ āϜāĻ¨ā§āϝ āφāĻŽāϰāĻž āĻŽā§āϝāĻžāύ⧁āϝāĻŧāĻžāϞāĻŋ āχāύāĻĒ⧁āϟ keypoints āϏāĻ‚āĻœā§āĻžāĻžāϝāĻŧāĻŋāϤ āĻ•āϰāϤ⧇ āĻšāĻŦ⧇āĨ¤ āĻāϟāĻŋ āĻ•āϰāĻžāϰ āϜāĻ¨ā§āϝ, āφāĻŽāϰāĻž āĻĒā§āϰāĻĨāĻŽā§‡ āϕ⧋āϝāĻŧāĻžāĻ¨ā§āϟāĻžāχāϞ āĻ—āĻŖāύāĻžāϰ āϜāĻ¨ā§āϝ āφāĻŽāĻžāĻĻ⧇āϰ āύāĻŋāϜāĻ¸ā§āĻŦ āϏāĻšāĻžāϝāĻŧāĻ• āĻĢāĻžāĻ‚āĻļāύ āϏāĻ‚āĻœā§āĻžāĻžāϝāĻŧāĻŋāϤ āĻ•āϰāĻŋāĨ¤

def compute_quantiles(features,
                      num_keypoints=10,
                      clip_min=None,
                      clip_max=None,
                      missing_value=None):
  # Clip min and max if desired.
  if clip_min is not None:
    features = np.maximum(features, clip_min)
    features = np.append(features, clip_min)
  if clip_max is not None:
    features = np.minimum(features, clip_max)
    features = np.append(features, clip_max)
  # Make features unique.
  unique_features = np.unique(features)
  # Remove missing values if specified.
  if missing_value is not None:
    unique_features = np.delete(unique_features,
                                np.where(unique_features == missing_value))
  # Compute and return quantiles over unique non-missing feature values.
  return np.quantile(
      unique_features,
      np.linspace(0., 1., num=num_keypoints),
      interpolation='nearest').astype(float)
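As a quick sanity check of the helper (toy data, not from the Puzzles dataset), the returned array contains num_keypoints increasing values drawn from the quantiles of the unique input values:

# Toy check: 5 keypoints over the unique values 0..99.
# The first and last keypoints are the minimum (0.0) and maximum (99.0).
print(compute_quantiles(np.arange(100, dtype=float), num_keypoints=5))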

āφāĻŽāĻžāĻĻ⧇āϰ āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻ•āύāĻĢāĻŋāĻ—āĻžāϰ āϏāĻ‚āĻœā§āĻžāĻžāϝāĻŧāĻŋāϤ āĻ•āϰāĻž

āĻāĻ–āύ āϝ⧇āĻšā§‡āϤ⧁ āφāĻŽāϰāĻž āφāĻŽāĻžāĻĻ⧇āϰ āϕ⧋āϝāĻŧāĻžāĻ¨ā§āϟāĻžāχāϞāϗ⧁āϞāĻŋ āĻ—āĻŖāύāĻž āĻ•āϰāϤ⧇ āĻĒāĻžāϰāĻŋ, āφāĻŽāϰāĻž āĻĒā§āϰāϤāĻŋāϟāĻŋ āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ⧇āϰ āϜāĻ¨ā§āϝ āĻāĻ•āϟāĻŋ āĻŦ⧈āĻļāĻŋāĻˇā§āĻŸā§āϝ āĻ•āύāĻĢāĻŋāĻ—āĻžāϰ āϏāĻ‚āĻœā§āĻžāĻžāϝāĻŧāĻŋāϤ āĻ•āϰāĻŋ āϝāĻž āφāĻŽāϰāĻž āφāĻŽāĻžāĻĻ⧇āϰ āĻŽāĻĄā§‡āϞāϟāĻŋāϕ⧇ āχāύāĻĒ⧁āϟ āĻšāĻŋāϏāĻžāĻŦ⧇ āύāĻŋāϤ⧇ āϚāĻžāχāĨ¤

# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
    tfl.configs.FeatureConfig(
        name='star_rating',
        lattice_size=2,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=5,
        pwl_calibration_input_keypoints=compute_quantiles(
            flattened_features['star_rating'], num_keypoints=5),
    ),
    tfl.configs.FeatureConfig(
        name='word_count',
        lattice_size=2,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=5,
        pwl_calibration_input_keypoints=compute_quantiles(
            flattened_features['word_count'], num_keypoints=5),
    ),
    tfl.configs.FeatureConfig(
        name='is_amazon',
        lattice_size=2,
        num_buckets=2,
    ),
    tfl.configs.FeatureConfig(
        name='includes_photo',
        lattice_size=2,
        num_buckets=2,
    ),
    tfl.configs.FeatureConfig(
        name='num_helpful',
        lattice_size=2,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=5,
        pwl_calibration_input_keypoints=compute_quantiles(
            flattened_features['num_helpful'], num_keypoints=5),
        # Larger num_helpful indicating more trust in star_rating.
        reflects_trust_in=[
            tfl.configs.TrustConfig(
                feature_name="star_rating", trust_type="trapezoid"),
        ],
    ),
    tfl.configs.FeatureConfig(
        name='num_reviews',
        lattice_size=2,
        monotonicity='increasing',
        pwl_calibration_num_keypoints=5,
        pwl_calibration_input_keypoints=compute_quantiles(
            flattened_features['num_reviews'], num_keypoints=5),
    )
]

āϏāĻžāĻŽāĻ—ā§āϰāĻŋāĻ• āĻĢāĻžāĻ‚āĻļāύ āĻŽāĻĄā§‡āϞ

āĻāĻ•āϟāĻŋ TfL premade āĻĒā§āϰāϤāĻŋāϰ⧂āĻĒ āĻ—āĻ āύ āĻ•āϰ⧇, āĻĒā§āϰāĻĨāĻŽ āĻĨ⧇āϕ⧇ āĻāĻ•āϟāĻŋ āĻŽāĻĄā§‡āϞ āĻ•āύāĻĢāĻŋāĻ—āĻžāϰ⧇āĻļāύ āĻ—āĻ āύ āĻ•āϰāĻž tfl.configs āĨ¤ āĻāĻ•āϟāĻŋ āϏāĻŽāĻˇā§āϟāĻŋāĻ—āϤ āĻĢāĻžāĻ‚āĻļāύ āĻŽāĻĄā§‡āϞ āĻŦā§āϝāĻŦāĻšāĻžāϰ āĻ•āϰ⧇ āύāĻŋāĻ°ā§āĻŽāĻŋāϤ āĻšāϝāĻŧ tfl.configs.AggregateFunctionConfig āĨ¤ āĻāϟāĻŋ āϟ⧁āĻ•āϰ⧋ āϟ⧁āĻ•āϰ⧋-āϰ⧈āĻ–āĻŋāĻ• āĻāĻŦāĻ‚ āĻļā§āϰ⧇āĻŖā§€āĻŦāĻĻā§āϧ āĻ•ā§āϰāĻŽāĻžāĻ™ā§āĻ•āύ āĻĒā§āϰāϝ⧋āĻœā§āϝ, āϤāĻžāϰāĻĒāϰ⧇ āĻ°â€ā§āϝāĻžāĻ—āĻĄ āχāύāĻĒ⧁āĻŸā§‡āϰ āĻĒā§āϰāϤāĻŋāϟāĻŋ āĻŽāĻžāĻ¤ā§āϰāĻžāϝāĻŧ āĻāĻ•āϟāĻŋ āϜāĻžāϞāĻŋ āĻŽāĻĄā§‡āϞ āĻ…āύ⧁āϏāϰāĻŖ āĻ•āϰ⧇āĨ¤ āϤāĻžāϰāĻĒāϰ⧇ āĻāϟāĻŋ āĻĒā§āϰāϤāĻŋāϟāĻŋ āĻŽāĻžāĻ¤ā§āϰāĻžāϰ āϜāĻ¨ā§āϝ āφāωāϟāĻĒ⧁āĻŸā§‡āϰ āωāĻĒāϰ āĻāĻ•āϟāĻŋ āϏāĻŽāĻˇā§āϟāĻŋ āĻ¸ā§āϤāϰ āĻĒā§āϰāϝāĻŧā§‹āĻ— āĻ•āϰ⧇āĨ¤ āĻāϟāĻŋ āϤāĻžāϰāĻĒāϰ āĻāĻ•āϟāĻŋ āϐāĻšā§āĻ›āĻŋāĻ• āφāωāϟāĻĒ⧁āϟ āĻĒāĻŋāϏāĻ“āϝāĻŧāĻžāχāϜ-āϞāĻŋāύāĻŋāϝāĻŧāĻžāϰ āĻ•ā§āϰāĻŽāĻžāĻ™ā§āĻ•āύ āĻĻā§āĻŦāĻžāϰāĻž āĻ…āύ⧁āϏāϰāĻŖ āĻ•āϰāĻž āĻšāϝāĻŧāĨ¤

# Model config defines the model structure for the aggregate function model.
aggregate_function_model_config = tfl.configs.AggregateFunctionConfig(
    feature_configs=feature_configs,
    middle_dimension=MIDDLE_DIM,
    middle_lattice_size=MIDDLE_LATTICE_SIZE,
    middle_calibration=True,
    middle_calibration_num_keypoints=MIDDLE_KEYPOINTS,
    middle_monotonicity='increasing',
    output_min=min_label,
    output_max=max_label,
    output_calibration=True,
    output_calibration_num_keypoints=OUTPUT_KEYPOINTS,
    output_initialization=np.linspace(
        min_label, max_label, num=OUTPUT_KEYPOINTS))
# An AggregateFunction premade model constructed from the given model config.
aggregate_function_model = tfl.premade.AggregateFunction(
    aggregate_function_model_config)
# Let's plot our model.
tf.keras.utils.plot_model(
    aggregate_function_model, show_layer_names=False, rankdir='LR')


āĻĒā§āϰāϤāĻŋāϟāĻŋ āĻ…ā§āϝāĻžāĻ—ā§āϰāĻŋāϗ⧇āĻļāύ āϞ⧇āϝāĻŧāĻžāϰ⧇āϰ āφāωāϟāĻĒ⧁āϟ āĻšāϞ āϰāĻžāĻ— āĻ•āϰāĻž āχāύāĻĒ⧁āϟāϗ⧁āϞāĻŋāϰ āωāĻĒāϰ āĻāĻ•āϟāĻŋ āĻ•ā§āϝāĻžāϞāĻŋāĻŦā§āϰ⧇āĻŸā§‡āĻĄ āϜāĻžāϞāĻŋāϰ āĻ—āĻĄāĻŧ āφāωāϟāĻĒ⧁āϟāĨ¤ āĻāĻ–āĻžāύ⧇ āĻĒā§āϰāĻĨāĻŽ āĻāĻ•āĻ¤ā§āϰāĻŋāϤāĻ•āϰāĻŖ āĻ¸ā§āϤāϰ⧇āϰ āĻ­āĻŋāϤāϰ⧇ āĻŦā§āϝāĻŦāĻšā§ƒāϤ āĻŽāĻĄā§‡āϞāϟāĻŋ āϰāϝāĻŧ⧇āϛ⧇:

aggregation_layers = [
    layer for layer in aggregate_function_model.layers
    if isinstance(layer, tfl.layers.Aggregation)
]
tf.keras.utils.plot_model(
    aggregation_layers[0].model, show_layer_names=False, rankdir='LR')


āĻāĻ–āύ, āĻ…āĻ¨ā§āϝ āϕ⧋āύ āĻŽāϤ tf.keras.Model , āφāĻŽāϰāĻž āĻāĻŦāĻ‚ āĻ•āĻŽā§āĻĒāĻžāχāϞ āφāĻŽāĻžāĻĻ⧇āϰ āϤāĻĨā§āϝ āĻŽāĻĄā§‡āϞ āĻŽāĻžāĻĒāϏāχ āĻ•āϰāĻž āĻšāĻŦ⧇āĨ¤

aggregate_function_model.compile(
    loss='mae',
    optimizer=tf.keras.optimizers.Adam(LEARNING_RATE))
aggregate_function_model.fit(
    train_xs, train_ys, epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, verbose=False)
<tensorflow.python.keras.callbacks.History at 0x7fee7d3033c8>

After training our model, we can evaluate it on our test set.

print('Test Set Evaluation...')
print(aggregate_function_model.evaluate(test_xs, test_ys))
Test Set Evaluation...
7/7 [==============================] - 2s 3ms/step - loss: 53.4633
53.4632682800293
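Since the premade model is a regular Keras model, we can also get per-puzzle predictions; a minimal sketch (the slice below is just for display):

# Predict one aggregated score per puzzle in the test set.
predictions = aggregate_function_model.predict(test_xs)
print(predictions[:5])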