You might be forgiven for assuming that money is money: that unless you are in the eurozone there is only one type of it, and that in the UK the unit is the pound sterling.
I believed this for many years, and it was only when I tried to complete a transaction in a way that would close in my favour that I discovered otherwise. I came across the same issue again recently, and I will use both instances to illustrate the fundamental difference between cash-money and credit-money.
After almost four years of paying £500 per month towards my car loan, I decided that a balance transfer offer on my credit card was a well-timed way to borrow an amount equal to what was outstanding, about £1,500, and pay off the loan. That way I could officially take ownership of the car and sell it to finance my next one.
I took the balance transfer offer and, poised with my credit card to settle the debt, it all became clear: this was not possible. The only way to pay off the loan was with hot cash, that is, by bank payment or debit card – in other words, cash only. Since I didn't actually have the hot money to clear it in one go, I had to sit out the last three months of the loan.
I'm sure I had come across other transactions where a credit card could not be used. In another motoring transaction, for example, the salesperson said that only up to £500 could be paid by credit card; the rest had to be paid by debit card or bank payment.
When you think about it, it makes sense to take advantage of balance transfers, which, at typical initial transfer fee rates, reduce the annual interest rate on borrowing to around 3.5%.
This shows that there are built-in measures to prevent account holders from using credit in whatever way suits them. A line of credit is available only as long as we abide by the rules on how we may spend it.
So, despite what looks like a perfectly legitimate opportunity to use a line of credit at 3.5% to make a lump-sum payment for something you want, you are unable to.
If, on the other hand, I were to visit an Apple shop and kit myself and the rest of my family out with consumer goods such as smartphones, tablets and laptops, to the tune of the price of a car, that would not be a problem.
The moral of the story is therefore to be very careful about how you pay for things. Don't assume you have infinite credit and can buy whatever you like just because your card company has increased your limit to £15k.
I hope to extend this journal about finance, as many of us don't understand how loans can affect our credit: if you buy things on credit in the wrong order, or at the wrong time, you could make your credit seem worthless and find yourself having to clear your existing loans before being eligible for more.
Power over Ethernet, PoE, defined under IEEE 802.3af & 802.3at and also called Power over LAN, PoL, is a useful way of powering equipment that uses Ethernet connectivity.
Power over Ethernet, PoE, sometimes also called Power over LAN, PoL, is defined under IEEE 802.3af and 802.3at and is a very convenient method of powering remote Ethernet-linked devices over the Ethernet line.
Often small items like routers, hubs and other devices need power and
many times they each require a small power supply. To avoid the use of
these supplies, and also provide functionality where there may not be a
convenient power source, PoE, Power over Ethernet provides an ideal
solution.
Originally the concept was defined under IEEE 802.3af, but after its
initial introduction the standard was refined and released with many
enhancements as IEEE 802.3at.
PoE Development
With Ethernet now an established standard, one of the limitations of
Ethernet related equipment was that it required power and this was not
always easily available. As a result some manufacturers started to offer
solutions whereby power could be supplied over the Ethernet cables
themselves. To prevent a variety of incompatible Power over Ethernet,
PoE, solutions appearing on the market, and the resulting confusion, the
IEEE began their standardisation process in 1999.
A variety of companies were involved in the development of the IEEE
standard. The result was the IEEE 802.3af standard that was approved for
release on 12 June 2003. Although some products were released before
this date and may not fully conform to the standard, most products
available today will conform to it, especially if they quote compliance
with 802.3af.
A further standard, designated IEEE 802.3at was released in 2009 and
this provided for several enhancements to the original IEEE 802.3af
specification.
PoE overview
The 802.3af standard allows for a nominal 48 volt supply (the source may provide between 44 and 57 volts) at a maximum continuous current of 350 milliamps, carried over two of the four pairs available on Cat 3 or Cat 5 cable. This gives a maximum of 15.4 watts at the power source, but losses along the cable mean the power guaranteed at the powered device is just under 13 watts (12.95 watts).
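As a rough cross-check of these figures, here is a minimal Python sketch, assuming the commonly quoted 802.3af Type 1 limits (44 V minimum at the source, 37 V minimum at the powered device after the cable drop, 350 mA maximum continuous current):

    # Back-of-the-envelope 802.3af power budget using the commonly quoted
    # Type 1 limits; these are assumptions for illustration, not a design guide.
    V_PSE_MIN = 44.0   # minimum voltage the PSE must supply (V)
    V_PD_MIN = 37.0    # minimum voltage guaranteed at the powered device (V)
    I_MAX = 0.350      # maximum continuous current (A)

    print("Power at the source:         %.2f W" % (V_PSE_MIN * I_MAX))  # 15.40 W
    print("Power at the powered device: %.2f W" % (V_PD_MIN * I_MAX))   # 12.95 W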
A standard Cat 5 cable contains four twisted pairs, of which only two are used by 10Base-T and 100Base-T systems. The standard therefore allows two options for Power over Ethernet: one uses the spare twisted pairs, while the second uses the pairs that carry the data. Only one option may be used on a given link, not both.
When the spare twisted pairs are used for the supply, the pair on pins 4 and 5 is connected together and normally carries the positive supply, while the pair connected to pins 7 and 8 of the connector carries the negative supply. Although this is the usual polarity, the specification actually allows either polarity to be used.
When the pairs carrying the data are employed, it is possible to apply DC power to the centre taps of the isolation transformers that terminate the data wires without disrupting the data transfer. In this mode of operation the pair on pins 3 and 6 and the pair on pins 1 and 2 may again be of either polarity.
As the supply reaching the powered device can be of either polarity a
full wave rectifier (bridge rectifier) is used to ensure that the
device consuming the power receives the correct polarity power.
Within the 802.3af standard two types of device are described:
Power Sourcing Equipment, PSE: This is the equipment that supplies power to the Ethernet cable.
Powered Device, PD: This is equipment that interfaces to the Ethernet cable and is powered by the supply on the cable. These devices may range from switches and hubs to other items such as webcams.
Power over Ethernet connections
It is useful to know the connections used for power on an Ethernet cable or connector when using PoE.
Ethernet Cable Pinout & Details

Pin No | Colour         | Telephone | 10Base-T | 100Base-T | 1000Base-T | PoE Mode A  | PoE Mode B
1      | White / green  | -         | +TX      | +TD       | +BI_DA     | 48 V out    | -
2      | Green          | -         | -TX      | -TX       | -BI_DA     | 48 V out    | -
3      | White / orange | -         | +RX      | +RX       | +BI_DB     | 48 V return | -
4      | Blue           | Ring      | -        | -         | +BI_DC     | -           | 48 V out
5      | Blue / white   | Tip       | -        | -         | -BI_DC     | -           | 48 V out
6      | Orange         | -         | -RX      | -RX       | -BI_DB     | 48 V return | -
7      | White / brown  | -         | -        | -         | +BI_DD     | -           | 48 V return
8      | Brown          | -         | -        | -         | -BI_DD     | -           | 48 V return
Power Sourcing Equipment, PSE
This needs to provide a number of functions apart from simply
supplying the power over the Ethernet system. The PSE obviously needs to
ensure that no damage is possible to any equipment that may be present
on the Ethernet system. The PSE first looks for devices that comply with
the IEEE 802.3af specification. This is achieved by applying a small
current-limited voltage to the cable. The PSE then checks for the
presence of a 25k ohm resistor in the remote device. If this load or
resistor is detected, then the 48V is applied to the cable, but it is
still current-limited to prevent damage to cables and equipment under
fault conditions.
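The detection step can be pictured with a small sketch: the PSE probes the line at two low, current-limited voltages, measures the current each time, and takes the slope to estimate the signature resistance. The probe values and the acceptance window below are illustrative, not taken from the standard.

    def signature_resistance(v1, i1, v2, i2):
        # The slope of the V-I line removes any fixed offset (e.g. a diode drop)
        # in the powered device's front end.
        return (v2 - v1) / (i2 - i1)

    # Two illustrative probe points (volts, amps)
    r = signature_resistance(4.0, 0.00016, 8.0, 0.00032)   # -> 25000 ohms

    if 19_000 <= r <= 26_500:   # approximate acceptance window around 25 kohm
        print("Valid PoE signature (%.0f ohm): apply 48 V, current limited" % r)
    else:
        print("No valid signature (%.0f ohm): power withheld" % r)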
The PSE will continue to supply power until the Powered Device (PD) is removed, or the PD stops drawing its minimum current.
Powered Device, PD
The powered device must be able to operate within the confines of the
Power over Ethernet specification. It receives a nominal 48 volts from
the cable, and must be able to accept power from either option, i.e.
either over the spare or data cables. Additionally the 48 volts supplied
is too high for operating the electronics to be powered, and
accordingly an isolated DC-DC converter is used to transform the 48V to a
lower voltage. This also enables 1500V isolation to be provided for
safety reasons.
Power over Ethernet, PoE, as defined under IEEE 802.3af and enhanced under IEEE 802.3at, provides a particularly valuable means of remotely supplying and controlling equipment connected to an Ethernet network or system. PoE enables units to be powered in situations where it may not be convenient to run a new power supply to the unit. While there are limitations to the power that can be supplied, the intention is that only small units are likely to need powering in this way; larger units can be powered by more conventional means.
Connectivity: Wireless & Wired
All the key topics associated with connectivity, including mobile telecommunications (2G, 3G, 4G, 5G), Wi-Fi, Bluetooth, IoT communications, Ethernet, USB and more: everything you need to know.
Connectivity in both wired and wireless forms is part of everyday life, from wired and fibre broadband to mobile communications (2G, 3G, 4G and 5G), Wi-Fi, Bluetooth and many other wireless technologies, through to standards like Ethernet and USB. Wi-Fi is particularly important, as demonstrated by the number of Wi-Fi routers, Wi-Fi repeaters and the like that are available for sale.
This section addresses a variety of topics associated with wireless connectivity: everything from Wi-Fi, Wi-Fi routers and repeaters through to other forms of wireless connectivity including Bluetooth, LoRa, NFC and many more. With the technology for smart homes and smart cities becoming more commonplace, these technologies are being used increasingly.
Although wireless technologies like Wi-Fi are widely used, wired connectivity is also important. Ethernet is one such example, as it is used for many computer connections. Items like Ethernet cables and many more can be found here, along with other wired connectivity areas such as USB, serial communications and networking solutions like NFV and SDN.
In this article, we present what the author rates as the top eight open source machine learning frameworks.
Learning may be defined as the process of improving one's ability to perform a task efficiently. Machine learning is a sub-field of computer science that enables modern computers to learn without being explicitly programmed. It has evolved from artificial intelligence via pattern recognition and computational learning theory, and it explores algorithms that can learn from data and make predictions on it. In recent times, machine learning has been deployed in a wide range of computing tasks where designing efficient algorithms and programs by hand is rather difficult, such as email spam filtering, optical character recognition, search engine improvement, digital image processing, data mining, etc.
Tom M. Mitchell, renowned computer scientist and professor at Carnegie
Mellon University, USA, defined machine learning as: “A computer program
is said to learn from experience E with respect to some class of tasks T
and performance measure P, if its performance at tasks in T, as
measured by P, improves with experience E.”
Machine learning tasks are broadly classified into three categories,
depending on the nature of the learning ‘signal’ or ‘feedback’ available
to a learning system.
Supervised learning: This is regarded as the machine learning task of inferring a function from labelled training data. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (the supervisory signal); a minimal worked example follows this list.
Unsupervised learning: This is regarded as the machine learning task of inferring a function to describe hidden structure in unlabelled data. It is closely related to the problem of density estimation in statistics.
Reinforcement learning: This is the area of machine learning concerned with how software agents take actions in an environment so as to maximise some notion of cumulative reward. It is applied to diverse areas like game theory, information theory, swarm intelligence, statistics and genetic algorithms. In machine learning, the environment is typically formulated as a Markov decision process (MDP), since many reinforcement learning algorithms rely on dynamic programming techniques.
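To make the supervised case concrete, here is a minimal sketch in Python with NumPy; the numbers are made up, and the "function" being learned is just a straight line fitted by ordinary least squares.

    import numpy as np

    # Labelled training data: input objects and their desired output values.
    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([2.1, 4.0, 6.2, 7.9])

    # Fit y ~ slope*x + intercept by ordinary least squares.
    Xb = np.hstack([X, np.ones_like(X)])              # add a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

    print("slope %.2f, intercept %.2f" % (w[0], w[1]))
    print("prediction for x = 5: %.2f" % (np.array([5.0, 1.0]) @ w))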
The application of machine learning to diverse areas of computing is
gaining popularity rapidly, not only because of cheap and powerful
hardware, but also because of the increasing availability of free and open source software, which enables machine learning to be implemented easily. Machine learning practitioners and researchers, being part of
the software engineering team, continuously build sophisticated
products, integrating intelligent algorithms with the final product to
make software work more reliably, quickly and without hassles.
There is a wide range of open source machine learning frameworks available in the market, which enable machine learning engineers to build, implement and maintain machine learning systems, start new projects and create impactful new systems.
Let’s take a look at some of the top open source machine learning frameworks available.
Apache Singa
The Singa Project was initiated by the DB System Group at the National
University of Singapore in 2014, with a primary focus on distributed
deep learning by partitioning the model and data onto nodes in a cluster
and parallelising the training. Apache Singa provides a simple
programming model and works across a cluster of machines. It is
primarily used in natural language processing (NLP) and image
recognition. A Singa prototype was accepted by the Apache Incubator in March 2015; it provides a flexible architecture for scalable distributed training and is extendable to run over a wide range of hardware.
Apache Singa was designed with an intuitive programming model based on
layer abstraction. A wide variety of popular deep learning models are
supported, such as feed-forward models like convolutional neural
networks (CNN), energy models like Restricted Boltzmann Machine (RBM),
and recurrent neural networks (RNN). Based on a flexible architecture,
Singa runs various synchronous, asynchronous and hybrid training
frameworks.
Singa’s software stack has three main components: Core, IO and Model.
The Core component is concerned with memory management and tensor
operations. IO contains classes for reading and writing data to the disk
and the network. Model includes data structures and algorithms for
machine learning models.
Its main features are:
Includes a tensor abstraction, providing stronger support for more advanced machine learning models
Supports device abstraction for running on varied hardware devices
Uses cmake for compilation rather than GNU autotools
Improved Python bindings, and more deep learning models such as VGG and ResNet
Includes enhanced IO classes for reading, writing, encoding and decoding files and data
Shogun
Shogun was initiated by Soeren Sonnenburg and Gunnar Raetsch in 1999 and is currently under rapid development by a large team of programmers. This free and open source toolbox, written in C++, provides algorithms and data structures for machine learning problems. The Shogun Toolbox can be used via a unified interface from C++, Python, Octave, R, Java and Lua, and can run on Windows, Linux and macOS.
Shogun is designed for unified large-scale learning for a broad range of
feature types and learning settings, like classification, regression,
dimensionality reduction, clustering, etc. It contains a number of
exclusive state-of-the-art algorithms, such as a wealth of efficient SVM
implementations, multiple kernel learning, kernel hypothesis testing,
Krylov methods, etc.
Shogun supports bindings to other machine learning libraries like
LibSVM, LibLinear, SVMLight, LibOCAS, libqp, VowpalWabbit, Tapkee, SLEP,
GPML and many more.
Its features include one-class classification, multi-class classification, regression, structured output learning, pre-processing, built-in model selection strategies, visualisation and test frameworks; and semi-supervised, multi-task and large-scale learning.
The latest version is 4.1.0. Website: http://www.shogun-toolbox.org/
Apache Mahout
Apache Mahout, being a free and open source project of the Apache
Software Foundation, has a goal to develop free distributed or scalable
machine learning algorithms for diverse areas like collaborative
filtering, clustering and classification. Mahout provides Java libraries
and Java collections for various kinds of mathematical operations.
Apache Mahout is implemented on top of Apache Hadoop using the MapReduce
paradigm. Once Big Data is stored on the Hadoop Distributed File System
(HDFS), Mahout provides the data science tools to automatically find
meaningful patterns in these Big Data sets, turning this into ‘big
information’ quickly and easily.
Building a recommendation engine: Mahout provides tools for building a recommendation engine via the Taste library, a fast and flexible engine for collaborative filtering (CF).
Clustering with Mahout: Several clustering algorithms are supported by Mahout, like Canopy, k-Means, Mean-Shift, Dirichlet, etc.
Categorising content with Mahout: Mahout uses the simple MapReduce-enabled naïve Bayes classifier.
The latest version is 0.12.2. Website: https://mahout.apache.org/
Apache Spark MLlib
Apache Spark MLlib is a machine learning library, the primary objective
of which is to make practical machine learning scalable and easy. It
comprises common learning algorithms and utilities, including
classification, regression, clustering, collaborative filtering,
dimensionality reduction as well as lower-level optimisation primitives
and higher-level pipeline APIs.
Spark MLlib is regarded as a distributed machine learning framework on
top of the Spark Core which, mainly due to the distributed memory-based
Spark architecture, is almost nine times as fast as the disk-based
implementation used by Apache Mahout.
The various common machine learning and statistical algorithms that have been implemented and included with MLlib are:
Summary statistics, correlations, hypothesis testing, random data generation
Classification and regression: Support vector machines, logistic regression, linear regression, naïve Bayes classification
Collaborative filtering techniques including Alternating Least Squares (ALS)
Cluster analysis methods including k-means and Latent Dirichlet Allocation (LDA)
Optimisation algorithms such as stochastic gradient descent and limited-memory BFGS (L-BFGS)
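As an illustration of how the library is used, here is a minimal PySpark sketch that trains a logistic regression model with the DataFrame-based API; it assumes a local Spark installation, and the data, column names and parameters are purely illustrative.

    from pyspark.sql import SparkSession
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.appName("mllib-sketch").getOrCreate()

    # Tiny hand-made labelled dataset: (label, feature vector)
    training = spark.createDataFrame(
        [(0.0, Vectors.dense(0.0, 1.1)),
         (1.0, Vectors.dense(2.0, 1.0)),
         (0.0, Vectors.dense(0.1, 1.2)),
         (1.0, Vectors.dense(1.9, 0.8))],
        ["label", "features"])

    lr = LogisticRegression(maxIter=10, regParam=0.01)
    model = lr.fit(training)                 # training is distributed by Spark
    print(model.coefficients, model.intercept)

    spark.stop()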
The latest version is 2.0.1. Website: http://spark.apache.org/mllib/
TensorFlow
TensorFlow is an open source software library for machine learning
developed by the Google Brain Team for various sorts of perceptual and
language understanding tasks, and to conduct sophisticated research on
machine learning and deep neural networks. It is Google Brain’s second
generation machine learning system and can run on multiple CPUs and
GPUs. TensorFlow is deployed in various products of Google like speech
recognition, Gmail, Google Photos and even Search.
TensorFlow performs numerical computations using data flow graphs. These
elaborate the mathematical computations with a directed graph of nodes
and edges. Nodes implement mathematical operations and can also
represent endpoints to feed in data, push out results or read/write
persistent variables. Edges describe the input/output relationships between nodes. Data edges carry dynamically-sized multi-dimensional data arrays or tensors.
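A minimal sketch of this graph-and-session model is shown below, assuming the 1.x-era Python API contemporary with the TensorFlow version quoted later in this section: nodes are defined first, and nothing is computed until the graph is run in a session.

    import tensorflow as tf

    # Nodes: two constants and a matrix-multiplication operation.
    # Edges between them carry tensors.
    a = tf.constant([[1.0, 2.0]])      # 1x2 tensor
    b = tf.constant([[3.0], [4.0]])    # 2x1 tensor
    product = tf.matmul(a, b)          # operation node

    # Nothing has been computed yet; the graph only runs inside a session.
    with tf.Session() as sess:
        print(sess.run(product))       # [[11.]]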
Its features are listed below.
Highly flexible: TensorFlow enables users to write their
own higher-level libraries on top of it by using C++ and Python, and
express the neural network computation as a data flow graph.
Portable: It can run on varied CPUs or GPUs, and even on mobile computing platforms. It also supports Docker and running via the cloud.
Auto-differentiation: TensorFlow enables the user to define
the computational architecture of predictive models combined with
objective functions, and can handle complex computations.
Diverse language options: It has an easy Python-based interface that enables users to write code and view visualisations and data flow graphs.
The latest version is 0.10.0. Website: www.tensorflow.org
Oryx 2
Oryx 2 is a realisation of Lambda architecture built on Apache Spark and
Apache Kafka for real-time large scale machine learning. It is designed
for building applications and includes packaged, end-to-end
applications for collaborative filtering, classification, regression and
clustering.
Oryx 2 comprises the following three tiers.
General Lambda architecture tier: Provides the batch, speed and serving layers, which are not specific to machine learning.
Specialisation tier: Sits on top of the Lambda architecture tier and provides machine learning abstractions such as hyperparameter selection.
Application tier: An end-to-end implementation of the same standard machine learning algorithms (ALS, random decision forests, k-means) packaged as an application on top.
Oryx 2 consists of the following layers of Lambda architecture as well as connecting elements.
Batch layer: Used for computing new results from historical data and previous results.
Speed layer: Produces and publishes incremental model updates from a stream of new data.
Serving layer: Receives models and updates, and implements a synchronous API, exposing query operations on results.
Data transport layer: Moves data between layers and takes input from external sources.
The latest version is 2.2.1. Website: http://oryx.io/
Accord.NET
Accord.NET is a .NET open source machine learning framework for
scientific computing, and consists of multiple libraries for diverse
applications like statistical data processing, pattern recognition,
linear algebra, artificial neural networks, image and signal processing,
etc.
The framework is divided into libraries via the installer, compressed
archives and NuGet packages, which include Accord.Math,
Accord.Statistics, Accord.MachineLearning, Accord.Neuro, Accord.Imaging,
Accord.Audio, Accord.Vision, Accord.Controls, Accord.Controls.Imaging,
Accord.Controls.Audio, Accord.Controls.Vision, etc.
Its features are:
Matrix library for an increase in code reusability, and gradual change of existing algorithms over standard .NET structures.
Consists of more than 40 different statistical distributions like hidden Markov models and mixture models.
Consists of more than 30 hypothesis tests like ANOVA, two-sample, multiple-sample, etc.
Consists of more than 38 kernel functions like KVM, KPC and KDA.
Amazon Machine Learning
Amazon Machine Learning (AML) is a machine learning service for
developers. It has many visualisation tools and wizards for creating
high-end sophisticated and intelligent machine learning models without
any need to learn complex ML algorithms and technologies. Via AML,
predictions for applications can be obtained using simple APIs without
using custom prediction generation code or complex infrastructure.
AML is based on simple, scalable, dynamic and flexible ML technology
used by Amazon’s ‘Internal Scientists’ community professionals to create
Amazon Cloud Services. AML connects to data stored in Amazon S3,
Redshift or RDS, and can run binary classification, multi-class
categorisation or regression on this data to create models.
The key concepts used in Amazon ML are listed below.
Datasources: Contain metadata associated with data inputs to Amazon ML.
ML models: Generate predictions using the patterns extracted from the input data.
Evaluations: Measure the quality of ML models.
Batch predictions: Asynchronously generate predictions for multiple input data observations.
Real-time predictions: Synchronously generate predictions for individual data observations.
Its key features are:
Supports multiple data sources within its system.
Allows users to create a data source object from data residing in Amazon Redshift – the data warehouse Platform as a Service.
Allows users to create a data source object from data stored in the MySQL database.
Supports three types of models: binary classification, multi-class classification and regression.
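For real-time predictions, the usual route is the AWS SDK; the sketch below uses Python's boto3 client for the service, and the model ID, endpoint URL and record fields are placeholders rather than real values.

    import boto3

    client = boto3.client("machinelearning", region_name="us-east-1")

    # Placeholder model ID, endpoint and record: substitute your own values.
    response = client.predict(
        MLModelId="ml-EXAMPLEMODELID",
        Record={"feature1": "2.5", "feature2": "red"},   # all values are strings
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )
    print(response["Prediction"])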
A comprehensive strategy to promote long-term peace and stability in Syria can be achieved through Turkey's leadership, the president of Turkey writes.
By Recep Tayyip Erdogan
Mr. Erdogan is the president of Turkey.
President Trump made the right call to withdraw from Syria. The United States
withdrawal, however, must be planned carefully and performed in
cooperation with the right partners to protect the interests of the
United States, the international community and the Syrian people.
Turkey, which has NATO’s second largest standing army, is the only
country with the power and commitment to perform that task.
In 2016, Turkey became the first country to deploy ground combat troops to fight the so-called Islamic State in Syria.
Our military incursion severed the group’s access to NATO’s borders and
impeded their ability to carry out terror attacks in Turkey and Europe.
Unlike coalition operations in Raqqa and Mosul, which relied heavily on airstrikes
that were carried out with little or no regard for civilian casualties,
Turkish troops and fighters of the Free Syrian Army went door to door
to root out insurgents in Al Bab, a former stronghold of the so-called
Islamic State.
Our approach left the
city’s core infrastructure largely intact and made it possible for life
to return to normal within days. Today, children are back at school, a Turkish-funded hospital
treats the sick, and new business projects create jobs and bolster the
local economy. This stable environment is the only cure for terrorism.
Turkey is committed to defeating the so-called Islamic State and other
terrorist groups in Syria, because the Turkish people are all too
familiar with the threat of violent extremism. In 2003, when I became
prime minister, coordinated attacks by Al Qaeda claimed dozens of lives in Turkey.
More recently, the so-called Islamic State terrorists targeted our citizens,
our way of life and the inclusive, moderate worldview that our
civilization represents. A few years back, the terrorist group called me
“treacherous Satan.” We saw the horror in the faces of thousands of Christians and Yazidis, who sought refuge in Turkey when these terrorists came for them in Syria and Iraq.
I say this again: There will be no victory for the terrorists. Turkey
will continue to do what it must to ensure its own safety and the
well-being of the international community.
Militarily speaking, the so-called Islamic State has been defeated in Syria. Yet
we are deeply concerned that some outside powers may use the
organization’s remnants as an excuse to meddle in Syria’s internal
affairs.
A military victory against the terrorist group is a mere first step. The lesson of Iraq, where this terrorist group was born, is that premature declarations of victory and the reckless actions they tend to spur create more problems than they solve. The international community cannot afford to make the same mistake again today.
Turkey proposes a comprehensive strategy to eliminate the root causes of
radicalization. We want to ensure that citizens do not feel disconnected
from government, terrorist groups do not get to prey on the grievances
of local communities and ordinary people can count on a stable future.
The first step is to create a stabilization force featuring fighters from
all parts of Syrian society. Only a diverse body can serve all Syrian
citizens and bring law and order to various parts of the country. In
this sense, I would like to point out that we have no argument with the
Syrian Kurds.
Under wartime conditions, many young Syrians had no choice but to join the P.Y.D./Y.P.G., the Syrian branch of the P.K.K.,
which Turkey and the United States consider a terrorist organization.
According to Human Rights Watch, the Y.P.G. militants have violated
international law by recruiting children.
Following the United States withdrawal from Syria, we will complete an intensive
vetting process to reunite child soldiers with their families and
include all fighters with no links to terrorist organizations in the new
stabilization force.
Ensuring adequate political representation for all communities is another
priority. Under Turkey’s watch, the Syrian territories that are under
the control of the Y.P.G. or the so-called Islamic State will be
governed by popularly elected councils. Individuals with no links to
terrorist groups will be eligible to represent their communities in
local governments.
Local councils in
predominantly Kurdish parts of northern Syria will largely consist of
the Kurdish community’s representatives whilst ensuring that all other
groups enjoy fair political representation. Turkish officials with
relevant experience will advise them on municipal affairs, education,
health care and emergency services.
Turkey intends to cooperate and coordinate our actions with our friends and
allies. We have been closely involved in the Geneva and Astana
processes, and are the sole stakeholder that can work simultaneously
with the United States and Russia. We will build on those partnerships
to get the job done in Syria.
It is time for all stakeholders to join forces to end the terror unleashed
by the Islamic State, an enemy of Islam and Muslims around the world,
and to preserve Syria’s territorial integrity. Turkey is volunteering to
shoulder this heavy burden at a critical time in history. We are
counting on the international community to stand with us.