Age | Commit message | Author |
|
Adjust this to take a device as a parameter instead of a node.
Signed-off-by: Simon Glass <sjg@chromium.org>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-on: Beaver, Jetson-TK1
Tested-by: Stephen Warren <swarren@nvidia.com>
|
|
Adjust this code to support a live device tree. This should be implemented
as a PHY driver but that is left as an exercise for the maintainer.
Signed-off-by: Simon Glass <sjg@chromium.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
|
|
The PMC can be modelled as a syscon peripheral. Add a driver for this
so that it can be accessed by drivers when needed. Enable it for tegra124
boards.
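For reference, the binding side of such a syscon driver is tiny. A minimal sketch (the driver-data tag and the client lookup mentioned afterwards are assumptions for illustration, not necessarily what this patch does):

    #include <common.h>
    #include <dm.h>
    #include <syscon.h>

    /* Illustrative driver-data tag used by clients to find the PMC */
    enum { TEGRA_SYSCON_PMC };

    static const struct udevice_id tegra_pmc_ids[] = {
        { .compatible = "nvidia,tegra124-pmc", .data = TEGRA_SYSCON_PMC },
        { }
    };

    U_BOOT_DRIVER(tegra_pmc_syscon) = {
        .name = "tegra_pmc_syscon",
        .id = UCLASS_SYSCON,
        .of_match = tegra_pmc_ids,
    };

Client drivers can then obtain the device with syscon_get_by_driver_data() and access the PMC registers through its regmap.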
Signed-off-by: Simon Glass <sjg@chromium.org>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-on: Beaver, Jetson-TK1
Tested-by: Stephen Warren <swarren@nvidia.com>
|
|
At present early clock init happens in SPL. If SPL did not run (because
for example U-Boot is chain-loaded from another boot loader) then the
clocks are not set as U-Boot expects.
Add a function to detect this and call the early clock init in U-Boot
proper.
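A minimal sketch of the idea follows; the detection helper named here is purely illustrative (the patch's actual check for a prior SPL run may differ), while clock_early_init() is the existing Tegra early clock setup entry point:

    /*
     * Run early clock init in U-Boot proper if SPL did not already do it,
     * e.g. because U-Boot was chain-loaded from another boot loader.
     * tegra_early_clocks_configured() is a placeholder for whatever state
     * check the patch actually uses.
     */
    static void tegra_clock_init_if_needed(void)
    {
        if (!tegra_early_clocks_configured())
            clock_early_init();
    }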
Signed-off-by: Simon Glass <sjg@chromium.org>
|
|
Introduce CONFIG_TEGRA124_MMC_DISABLE_EXT_LOOPBACK to disable the external clock
loopback and use the internal one on SDMMC3, setting the SDMMC_VENDOR_MISC_CNTRL_0
register's SDMMC_SPARE1 bits to 0xfffd as per the TRM.
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Marcel Ziswiler <marcel@ziswiler.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
A future patch will implement a clock uclass driver for Tegra. That driver
will call into Tegra's existing clock code to simplify the transition;
this avoids tying the clock uclass patches to significant refactoring
of the existing custom clock API implementation.
Some of the Tegra clock APIs that manipulate peripheral clocks require
both the peripheral clock ID and parent clock ID to be passed in together.
However, the clock uclass API does not require any such "parent"
parameter, so the clock driver must determine this information itself.
This patch implements a new Tegra-specific clock API,
clock_get_periph_parent(), for this purpose.
The new API is implemented in the core Tegra clock code rather than the
SoC-specific clock code. The implementation uses various SoC-/clock-specific
data. That data is only available in SoC-specific clock code.
Consequently, two new internal APIs are added that enable the core clock
code to retrieve this information from the SoC-specific clock code. Due to
the structure of the Tegra clock code, this leads to some unfortunate code
duplication. However, this situation predates this patch.
Ideally, future work will de-duplicate the Tegra clock code, and migrate
it into drivers/clk/tegra. However, such refactoring is kept separate from
this series.
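As an illustration, a clock uclass driver's set_rate operation can then lean on the existing custom code roughly like this (a sketch; the real uclass driver arrives in a later patch and may differ in detail):

    static ulong tegra_clk_set_rate(struct clk *clk, ulong rate)
    {
        enum periph_id periph = (enum periph_id)clk->id;
        enum clock_id parent = clock_get_periph_parent(periph);

        /* Reuse the existing Tegra code to pick and program the divider */
        return clock_adjust_periph_pll_div(periph, parent, rate, NULL);
    }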
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Currently, Tegra peripheral drivers control two aspects of their HW module
clock(s):
1) The clock enable/rate for the peripheral clock itself.
2) The system-level clock tree setup, i.e. the clock parent.
Aspect 1 is reasonable, but aspect 2 is a system-level decision, not
something that an individual peripheral driver should in general know
about or influence. Such system-level knowledge ties drivers to a
specific SoC implementation, even when they use generic APIs for clock
manipulation, since they must have SoC-specific knowledge such as parent
clock IDs. Limited exceptions exist, for example where peripheral HW is
expected to dynamically switch between clock sources at run-time, as with
CPU clock scaling or display clock conflict management in a multi-head
scenario.
This patch enhances the Tegra core code to perform system-level clock
tree setup, in a similar fashion to the Linux kernel Tegra clock driver.
This will allow future patches to simplify peripheral drivers by removing
the clock parent setup logic.
This change is required prior to converting peripheral drivers to use the
standard clock APIs, since:
1) The clock uclass doesn't currently support a set_parent() operation.
Adding one is possible, but not necessary at the moment.
2) The clock APIs retrieve all clock IDs from device tree, and the DT
bindings for almost all peripherals only include information about the
relevant peripheral clocks, and not any potential parent clocks.
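One plausible shape for the system-level clock tree setup described above is a small table walked once at boot, similar to the Linux clock driver's init tables (illustrative only; the table and the code that applies it are assumptions, though the PERIPH_ID_*/CLOCK_ID_* identifiers are the existing Tegra ones):

    struct clock_tree_entry {
        enum periph_id periph;  /* peripheral whose parent is programmed */
        enum clock_id parent;   /* desired parent clock */
    };

    /* Example entries only; the real list is SoC-specific */
    static const struct clock_tree_entry clock_tree[] = {
        { PERIPH_ID_SDMMC1, CLOCK_ID_PERIPH },
        { PERIPH_ID_SDMMC4, CLOCK_ID_PERIPH },
    };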
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Convert the Tegra MMC driver to DM_MMC. Support for non-DM is removed
to avoid ifdefs in the code. DM_MMC is now enabled for all Tegra builds.
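For reference, the DM side of the conversion boils down to a driver declaration along these lines (a sketch; exact member names in the driver may differ, and priv_auto_alloc_size is the DM field name used at the time of this series):

    static const struct udevice_id tegra_mmc_ids[] = {
        { .compatible = "nvidia,tegra20-sdhci" },
        { .compatible = "nvidia,tegra30-sdhci" },
        { .compatible = "nvidia,tegra114-sdhci" },
        { .compatible = "nvidia,tegra124-sdhci" },
        { }
    };

    U_BOOT_DRIVER(tegra_mmc_drv) = {
        .name = "tegra_mmc",
        .id = UCLASS_MMC,
        .of_match = tegra_mmc_ids,
        .probe = tegra_mmc_probe,
        .priv_auto_alloc_size = sizeof(struct tegra_mmc_priv),
    };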
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
(swarren, fixed some NULL pointer dereferences, removed extraneous
changes, rebased on various other changes, removed non-DM support etc.)
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
|
struct mmc_host is a Tegra-specific structure, but the name implies it's
something defined by core MMC code, which is confusing. Rename it to
struct tegra_mmc_priv to make its purpose more obvious. The new name is
also more appropriate for a DM driver private data structure, which will
be relevant later in this series.
Nothing needs access to this type except the MMC driver itself. Move the
definition into the driver C file.
Make sure all Tegra MMC functions are named tegra_mmc_*. Even though
they're all static, it's useful to have good naming so that symbol tables
are easy to interpret. A few functions aren't renamed by this patch since
they'll be deleted by a subsequent patch in this series.
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
pad_init_mmc() is performing an SoC-specific operation, using registers
within the MMC controller. There's no reason to implement this code
outside the MMC driver, so move it inside the driver.
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Tegra186 supports the new standard clock and reset APIs. Older Tegra SoCs
still use custom APIs. Enhance the Tegra MMC driver so that it can operate
with either set of APIs.
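The probe path then ends up with two branches along these lines (a sketch; the priv field names are illustrative, and the prototype of the legacy clock_decode_periph_id() helper has changed over time):

    static int tegra_mmc_get_clk_rst(struct udevice *dev,
                                     struct tegra_mmc_priv *priv)
    {
        int ret;

    #ifdef CONFIG_TEGRA186
        /* Standard clock/reset uclass APIs */
        ret = reset_get_by_name(dev, "sdhci", &priv->reset_ctl);
        if (ret)
            return ret;

        ret = clk_get_by_index(dev, 0, &priv->clk);
        if (ret)
            return ret;
    #else
        /* Legacy custom Tegra clock API */
        priv->periph_id = clock_decode_periph_id(dev);
        if (priv->periph_id == PERIPH_ID_NONE)
            return -EINVAL;
    #endif

        return 0;
    }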
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The Tegra BPMP (Boot and Power Management Processor) is a separate
auxiliary CPU embedded into Tegra to perform power management work, and
controls related features such as clocks, resets, power domains, PMIC I2C
bus, etc. This driver provides the core low-level communication path by
which feature-specific drivers (such as clock) can make requests to the
BPMP. This driver is similar to an MFD driver in the Linux kernel. It is
unconditionally selected by CONFIG_TEGRA186 since virtually any Tegra186
build of U-Boot will need the feature.
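Feature-specific drivers are expected to sit underneath the BPMP node and funnel their requests through it; one natural fit in U-Boot is the misc uclass call interface, roughly as sketched here (the MRQ number and message structs are placeholders, not the real ABI):

    #include <dm.h>
    #include <misc.h>

    /* Placeholder MRQ number and request/response layout, for illustration */
    #define MRQ_EXAMPLE 0
    struct example_req { u32 cmd; };
    struct example_resp { u32 status; };

    static int bpmp_child_request(struct udevice *dev)
    {
        struct example_req req = { .cmd = 0 };
        struct example_resp resp;

        /* dev->parent is the BPMP device providing the transport */
        return misc_call(dev->parent, MRQ_EXAMPLE, &req, sizeof(req),
                         &resp, sizeof(resp));
    }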
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The IVC (Inter-VM Communication) protocol is a Tegra-specific IPC
(Inter-Processor Communication) framework. Within the context of U-Boot, it is
typically used for communication between the main CPU and various
auxiliary processors. In particular, it will be used to communicate with
the BPMP (Boot and Power Management Processor) on Tegra186 in order to
manipulate clocks and reset signals.
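The resulting interface is a small frame-based ring API; a send/receive round trip from a higher-level driver looks roughly like this (a sketch; exact prototypes may differ slightly):

    static int ivc_exchange(struct tegra_ivc *ivc, const void *req,
                            void *resp, size_t len)
    {
        void *frame;
        int ret;

        ret = tegra_ivc_write_get_next_frame(ivc, &frame);
        if (ret)
            return ret;
        memcpy(frame, req, len);
        ret = tegra_ivc_write_advance(ivc);
        if (ret)
            return ret;

        /* ... wait for the remote processor to fill a response frame ... */

        ret = tegra_ivc_read_get_next_frame(ivc, &frame);
        if (ret)
            return ret;
        memcpy(resp, frame, len);
        return tegra_ivc_read_advance(ivc);
    }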
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Fix a number of typos, including:
* "compatble" -> "compatible"
* "eanbeld" -> "enabled"
* "envrionment" -> "environment"
* "FTD" -> "FDT" (for "flattened device tree")
* "ommitted" -> "omitted"
* "overriden" -> "overridden"
* "partiton" -> "partition"
* "propogate" -> "propagate"
* "resourse" -> "resource"
* "rest in piece" -> "rest in peace"
* "suport" -> "support"
* "varible" -> "variable"
Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
|
|
Tegra186's MMC controller needs to be explicitly identified. Add another
compatible value for it.
Tegra186 will use an entirely different clock/reset control mechanism to
existing chips, and will use standard clock/reset APIs rather than the
existing Tegra-specific custom APIs. The driver support for that isn't
ready yet, so simply disable all clock/reset usage if compiling for
Tegra186. This must happen at compile time rather than run-time since the
custom APIs won't even be compiled in on Tegra186. In the long term, the
plan would be to convert the existing custom APIs to standard APIs and get
rid of the ifdefs completely.
The system's main eMMC will work without any clock/reset support, since
the firmware will have already initialized the controller in order to
load U-Boot. Hence the driver is useful even in this apparently crippled
state.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The list of all Tegra GPIOs is currently defined in two places: the DT
binding header and the custom Tegra-specific header file gpio.h. Fix
the redundancy by replacing everything with the DT binding header file.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
In current Linux kernel Tegra DT files, 64-bit addresses are represented
in unit addresses as a pair of comma-separated 32-bit values. Apparently
this is no longer the correct representation for simple busses, and the
unit address should be represented as a single 64-bit value. If this is
changed in the DTs, arch/arm/mach-tegra/board2.c:ft_system_setup() will no
longer be able to find and enable the GPU node, since it looks up the node
by name.
Fix that function to enable nodes based on their compatible value rather
than their node name. This will work no matter what the node name is, i.e.
for DTs both before and after any rename operation.
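The lookup then becomes a plain libfdt search by compatible instead of a path/name match, along these lines ("nvidia,gk20a" is the Tegra124 GPU compatible; the helper name is illustrative):

    static int ft_enable_by_compatible(void *blob, const char *compat)
    {
        int offset = fdt_node_offset_by_compatible(blob, -1, compat);

        if (offset < 0)
            return offset;

        return fdt_status_okay(blob, offset);
    }

    /* e.g. ft_enable_by_compatible(blob, "nvidia,gk20a"); */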
Cc: Thierry Reding <treding@nvidia.com>
Cc: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Remove the old PWM code. Remove calls to CONFIG_LCD functions now that we
are using driver model for video.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
There isn't a lot of benefit in having two separate files. With driver model
the code needs to be in the same driver, so it's better to have it in the
same file.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
This PWM supports four channels. The driver always uses the 32KHz clock,
and adjusts the duty cycle accordingly.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
In a number of places we had wordings of the GPL (or LGPL in a few
cases) license text that were split in such a way that it wasn't caught
previously. Convert all of these to the correct SPDX-License-Identifier
tag.
Signed-off-by: Tom Rini <trini@konsulko.com>
|
|
Rename GPU functions to less generic names to avoid potential name
collisions.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
There is no justification for this function, especially in exported
form.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
On currently supported SoCs, clk_m always runs at the same frequency as
the oscillator input. However newer SoC generations such as Tegra210 no
longer have that restriction. Prepare for that by separating clk_m from
the oscillator clock and allow SoC code to override the clk_m rate.
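One plausible shape for the split is a weak default that SoC code overrides (the function name and prototype here are assumptions for illustration):

    /*
     * Default for existing SoCs: clk_m runs at the oscillator rate.
     * Tegra210 code would override this to apply its divider.
     */
    u32 __weak clk_m_get_rate(u32 parent_rate)
    {
        return parent_rate;
    }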
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
This function is deleted by commit 2fccd2d96bad
"tegra: Convert tegra GPIO driver to use driver model".
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Acked-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Introduce the BIT() definition, used in the at91_udc gadget
driver.
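For reference, the definition added to the common headers is the usual one:

    #define BIT(nr) (1UL << (nr))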
Signed-off-by: Heiko Schocher <hs@denx.de>
[remove all other occurrences of BIT(x) definition]
Signed-off-by: Andreas Bießmann <andreas.devel@googlemail.com>
Acked-by: Stefan Roese <sr@denx.de>
Acked-by: Anatolij Gustschin <agust@denx.de>
|
|
This header file uses type definitions (u8, u32) from linux/types.h but
doesn't include it. If includes aren't carefully ordered this can cause
build failures.
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Add defines to allow reading recovery mode (RCM) boot type from the boot
information table (BIT) written by the boot ROM (BR) to the IRAM.
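Boards can then detect an RCM boot roughly as follows (a sketch: the define names are assumed to match those added here, and NV_PA_BASE_SRAM is the existing Tegra IRAM base):

    #include <asm/io.h>

    static bool tegra_booted_from_rcm(void)
    {
        /* Boot type recorded by the boot ROM in the BIT in IRAM */
        u32 boot_type = readl(NV_PA_BASE_SRAM + NVBOOTINFOTABLE_BOOTTYPE);

        return boot_type == NVBOOTTYPE_RECOVERY;
    }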
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Acked-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
T124/210 requires some specific configuration (VPR setup) to
be performed by the bootloader before the GPU can be used.
For this reason, the GPU node in the device tree is disabled
by default. This patch enables the node if U-Boot has performed
VPR configuration.
Boards enabled by this patch are T124's Jetson TK1 and Venice2
and T210's P2571.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
U-Boot is responsible for enabling the GPU DT node after all necessary
configuration (VPR setup for T124) is performed. In order to be able to
check whether this configuration has been performed right before booting
the kernel, make it happen during board_init().
Also move VPR configuration into the more generic gpu.c file, which will
also host other GPU-related functions, and let boards specify
individually whether they need VPR setup or not.
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Tom Warren <twarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Added PLL variables (dividers mask/shift, lock enable/detect, etc.)
to a new pllinfo struct for each SoC/PLL (PLLA/C/D/E/M/P/U/X).
Used the pllinfo struct in all clock functions; validated on T210.
Should be equivalent to the prior code on T124/114/30/20. Thanks
to Marcel Ziswiler for corrections to the T20/T30 values.
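The per-PLL description is along these lines (field names are illustrative of the data collected; the real struct may differ in detail):

    struct clk_pll_info {
        u32 m_shift, m_mask;        /* input divider (M) field */
        u32 n_shift, n_mask;        /* feedback divider (N) field */
        u32 p_shift, p_mask;        /* post divider (P) field */
        u32 lock_ena;               /* lock-enable bit position */
        u32 lock_det;               /* lock-detect bit position */
        u32 kcp_shift, kvco_shift;  /* charge pump / VCO control fields */
    };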
Signed-off-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Added 38.4MHz/48MHz entries to pll_x_table for CPU PLL. Needs
to be measured - should be close to 700MHz (1.4G/2).
Note that some freqs aren't in the PLLU table in T210 TRM
(13, 26MHz), so I used the 12MHz table entry for them. They
shouldn't be selected since they're not viable T210 OSC freqs.
Since there are now 2 new OSC defines, all tables (pll_x_table,
PLLU) had to increase by two entries, but since 38.4/48MHz are
not viable osc freqs on T20/30/114, etc., they're just set to 0.
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Derived from Tegra124, modified as appropriate during T210
board bringup. Cleaned up debug statements to conserve
string space, too. This also adds misc 64-bit changes
from Thierry Reding/Stephen Warren.
Signed-off-by: Tom Warren <twarren@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
|
Add a hook to allow boards to add their own init to board_init().
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Add a simple function to enable external clocks.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Will be used for unpowergating CPUs.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Tested-by: Ian Campbell <ijc@hellion.org.uk>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Add full link training as a fallback in case the fast link training
fails.
Signed-off-by: Simon Glass <sjg@chromium.org>
Acked-by: Anatolij Gustschin <agust@denx.de>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Allow this to be used by other Tegra SoCs.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Add functions to provide access to the display clocks on Tegra124 including
setting the clock rate for an EDP display.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Create a function which sets the source clock for a peripheral, given
the number of mux bits to adjust. This can then be used more generally.
For now, don't export it.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
The get_pll() function can do the wrong thing if passed values that are
out of range. Add checks for this and add a function which can return
a 'simple' PLL. This can be defined by SoCs with their own clocks.
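A sketch of the added check (NV_PA_CLK_RST_BASE and struct clk_rst_ctlr are the existing Tegra definitions; the exact handling in the patch may differ):

    struct clk_pll *get_pll(enum clock_id clkid)
    {
        struct clk_rst_ctlr *clkrst =
            (struct clk_rst_ctlr *)NV_PA_CLK_RST_BASE;

        assert(clock_id_is_pll(clkid));
        if (clkid >= (enum clock_id)TEGRA_CLK_PLLS) {
            debug("%s: Invalid PLL %d\n", __func__, clkid);
            return NULL;
        }

        return &clkrst->crc_pll[clkid];
    }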
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Some LCDs require a PMIC to be set up - add a function for this.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Add a way of displaying a numeric board ID on start-up.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
When enabling a PWM, allow the existing clock rate and source to stand
unchanged.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
This is needed for tegra124 also, so make it common and add a header file
for tegra124.
Signed-off-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Some pinmux controls are in a different register set. Add support for
manipulating those in a similar way to existing pins/groups.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Move the struct pmux_pingrp_desc type and the tegra_soc_pingroups variable
declaration together with the other pin/mux-level definitions. Now the whole
file is grouped/ordered with pin/mux-related definitions first, then
drvgrp-related definitions.
Fix a typo in an ifdef comment.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
Tegra210 has a per-pin option named e_io_hv, which indicates that the
pin's input path should be configured to be 3.3v-tolerant. Add support
for this.
Note that this is very similar to previous chips' rcv_sel option.
However, since the Tegra TRM names this option differently for the
different chips, we support the new name so that the code exactly matches
the naming in the TRM, to avoid confusion.
This patch incorporates a few fixes from Tom Warren <twarren@nvidia.com>.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
T210 supports HSM and Schmitt options in the pinmux register (previous
chips placed these options in the drive group register). Update the
code to handle this.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|
|
On some future SoCs, some per-drive-group features became per-pin
features. Move all type definitions early in the header so they can
be enabled irrespective of the setting of TEGRA_PMX_SOC_HAS_DRVGRPS.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Tom Warren <twarren@nvidia.com>
|